MSU Perceptual Video Quality tool
Frequently Asked Questions
MSU Graphics & Media Lab (Video Group)
Project, ideas: Dr. Dmitriy Vatolin
Implementation: Oleg Petrov
List of the Frequently Asked Questions
- [0] What is this page about?
- [1] What is this tool for?
- [2] What does "subjective video quality" mean?
- [3] How should I measure "subjective video quality"?
- [4] Which subjective testing methodology is the most popular?
- [5] What is a "task file"?
- [6] What video formats are supported in this tool?
- [7] How does an expert compare videos?
- [8] What is stored in an expert's results file?
- [9] A "task file" contains different versions of a single video sequence, but usually a number of different sequences are used for a comparison; how can I handle this?
- [10] OK, the experts have finished the task; how do I compute average results?
- [11] What is the range of marks?
Answers to Frequently Asked Questions
Q: What is this page about?
A: This page is a list of Frequently Asked Questions about the MSU Perceptual Video Quality tool.
Q: What is this tool for?
A: This tool helps you to perform subjective video quality evaluation.
Q: What does "subjective video quality" mean?
A: You can measure video quality using a mathematical formula, such as PSNR or the more complex VQM and SSIM metrics. These methods are implemented in our MSU Video Quality Measurement Tool. This is "objective video quality". But the results of objective measurements do not always correlate with a person's subjective impressions, so the only way to predict users' opinions is to ask them! Video quality marks given by human experts are called "subjective video quality".
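For illustration, here is a minimal sketch of the PSNR formula in Python (assuming two 8-bit grayscale frames given as NumPy arrays; this is not part of the tool, just the idea of an objective metric):

import numpy as np

def psnr(reference, distorted, peak=255.0):
    # Mean squared error between the two frames
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # frames are identical
    # PSNR in decibels: higher means the distorted frame is closer to the reference
    return 10.0 * np.log10(peak ** 2 / mse)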
Q: How should I measure "subjective video quality"?
A: If you want to get reliable results, you should follow the recommendations on subjective assessment (ITU-R BT.500), but the simplified procedure is:
- Choose video sequences for testing (these are often called SRC).
- Choose the settings of the systems that you want to compare (often called HRC).
- Choose a test methodology (how the sequences are presented to experts and how their opinions are collected).
- Invite a sufficient number of experts (at least 15 are recommended).
- Calculate average marks for each HRC based on the experts' opinions (see the sketch after this list).
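As an illustration of the last step, here is a minimal Python sketch of averaging marks per HRC, with the 95% confidence interval that BT.500 recommends reporting (the marks and HRC names here are hypothetical):

import math

# Hypothetical data: marks[hrc] -> list of marks from individual experts
marks = {"codec_A": [7, 8, 6, 9, 7], "codec_B": [4, 5, 5, 6, 4]}

for hrc, scores in marks.items():
    n = len(scores)
    mos = sum(scores) / n                                # mean opinion score
    var = sum((s - mos) ** 2 for s in scores) / (n - 1)  # sample variance
    ci95 = 1.96 * math.sqrt(var / n)                     # 95% confidence interval
    print(f"{hrc}: MOS = {mos:.2f} +/- {ci95:.2f}")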
Q: Which subjective testing methodology is the most popular?
A: DSCQS type 2 is used quite often (for instance, in "Subjective Quality Assessment of The Emerging AVC/H.264 Coding Standard"), as are SAMVIQ (e.g. in "Subjective Quality of Internet Video Codecs" and in our "Subjective Comparison of Video Codecs") and DSIS.
A: "Task file" is created by "MSU PVQ - task manager". Task is a collection of video files + information about how they will be shown to experts. All video files in task are supposed to be different versions of one video sequence. In some test methods you must choose "task reference"; it is supposed to be unimpaired (uncompressed) version of the sequence.
Q: What video formats are supported in this tool?
A: This tool supports the .avi format (with any kind of compressed video) and .avs files (AviSynth scripts).
Warning: to play the video files, you must have all the appropriate codecs installed!
Through AviSynth you can use many other video formats. For more information, see the AviSynth page.
Q: How does an expert compare videos?
A: The expert runs the "MSU perceptual video quality player", enters his or her name and opens a task file (this can be simplified; see question [9] below). When the expert finishes the task, a subfolder named after the task is created in the folder that contains the task file. The results from all experts are stored there; the results of a particular expert are stored in the file "expert_name".csv.
Q: What is stored in an expert's results file?
A: An expert's results are stored in the following format:
task type,           [string with type of task]
average framerate,   [0|1]                     // whether the "average framerate" option is enabled
pause allowed,       [0|1]
rewind allowed,      [0|1]
one to each,         [0|1]
number of tests,     [number of tests]
number of videos,    [number of videos]
reference video,     [reference video name]    // present only if the task type is "one to each"
video,               mark
[video name],        [mark]                    // the meaning of the mark depends on the test methodology
...,                 ...
screen resolution,   [resolution values]
video,               decompressor
[video name],        [decompressor name]
...,                 ...
time of assessment,  [time]
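For example, the marks section can be read out of such a file with a few lines of Python (a sketch that assumes the layout above; check it against your actual files):

import csv

def read_marks(path):
    # Extract {video name: mark} pairs from an expert's results file
    marks = {}
    in_marks = False
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue
            key = row[0].strip()
            if key == "video" and len(row) > 1 and row[1].strip() == "mark":
                in_marks = True      # header of the marks section
            elif key == "screen resolution":
                break                # end of the marks section
            elif in_marks and len(row) > 1:
                marks[key] = float(row[1])
    return marks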
Q: A "task file" contains different versions of a single video sequence, but usually a number of different sequences are used for a comparison; how can I handle this?
A: You can create a .tsk file for each sequence that you use in the comparison. Then create a .bat file with the following line:
"MSU perceptual video quality player.exe" "c:\tasks\task1.tsk" "c:\tasks\task2.tsk" ...
When an expert runs this file, he or she goes through all of these tasks in turn (without having to open each one manually).
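If there are many sequences, such a .bat file can also be generated automatically; a small Python sketch (the folder paths and the player location are assumptions):

import glob

# Collect all task files and write one batch file that feeds them
# to the player in a single run
tasks = sorted(glob.glob(r"c:\tasks\*.tsk"))
line = '"MSU perceptual video quality player.exe" ' + " ".join(f'"{t}"' for t in tasks)
with open(r"c:\tasks\run_all.bat", "w") as bat:
    bat.write(line + "\n")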
Q: OK, the experts have finished the task; how do I compute average results?
A: The results of all experts are stored in a subfolder named after the task, inside the folder that contains the task file. To compute the average results, open the task file and press the "count results" button. The results will be saved to the file "average mark.csv" in the folder with the experts' results.
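The "count results" button does this for you, but if you prefer to post-process the per-expert files yourself, here is a sketch that reuses read_marks from question [8] (the folder layout is as described above; the task path is hypothetical):

import glob
from collections import defaultdict

totals, counts = defaultdict(float), defaultdict(int)
for path in glob.glob(r"c:\tasks\task1\*.csv"):      # one .csv per expert
    if path.endswith("average mark.csv"):
        continue                                     # skip the tool's own output
    for video, mark in read_marks(path).items():
        totals[video] += mark
        counts[video] += 1

for video in sorted(totals):
    print(f"{video}: average mark = {totals[video] / counts[video]:.2f}")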
Q: What is the range of marks?
A: In the results file of a particular expert, the range of marks depends on the test methodology. In the averaged results, all marks are in the range from 0 to 10; the higher, the better.
Project sponsored by YUVsoft Corp.
Project supported by MSU Graphics & Media Lab