MSU Perceptual Video Quality tool

Frequently Asked Questions

MSU Graphics & Media Lab (Video Group)

Project, ideas: Dr. Dmitriy Vatolin
Implementation: Oleg Petrov

Answers to Frequently Asked Questions

Q: What is this page about?

A: This page is a list of Frequently Asked Questions about the MSU Perceptual Video Quality tool.

Q: What is this tool for?

A: This tool helps you perform subjective video quality evaluation.

Q: What does "subjective video quality" mean?

A: You can measure video quality using a mathematical formula, such as PSNR or the more complex VQM and SSIM metrics. These methods are implemented in our MSU Video Measurement tool; this is "objective video quality". But the results of objective measurements do not always correlate with a person's subjective impressions, so the only way to predict users' opinion is to ask them! Video quality marks given by experts are called "subjective video quality".

Q: How should I measure "subjective video quality"?

A: If you want reliable results, you should follow the recommendations on subjective assessments (ITU-R BT.500), but a simplified procedure is:

  1. Choose video sequences for testing (often called SRC).
  2. Choose the settings of the systems that you want to compare (often called HRC).
  3. Choose a test methodology (how sequences are presented to experts and how their opinions are collected).
  4. Invite a sufficient number of experts (at least 15 are recommended).
  5. Calculate the average mark for each HRC based on their opinions.
The key point is the test methodology that you choose for your comparison. Five methods are implemented in our tool, each with its advantages and disadvantages.
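Step 5 of the procedure above can be sketched in a few lines of Python. The data layout below is hypothetical (the HRC names and marks are made up for illustration, not produced by the tool):

```python
# A minimal sketch of step 5: averaging expert marks per HRC.
# marks[hrc] is the list of marks individual experts gave that HRC
# (hypothetical example data, not the tool's own format).
from statistics import mean

marks = {
    "codec_A_1mbps": [7, 8, 6, 7, 9],
    "codec_A_2mbps": [9, 9, 8, 10, 9],
}

# Mean opinion score per HRC, printed best first
mos = {hrc: mean(scores) for hrc, scores in marks.items()}
for hrc, score in sorted(mos.items(), key=lambda kv: -kv[1]):
    print(f"{hrc}: {score:.2f}")
```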

Q: Which subjective testing methodology is the most popular?

A: DSCQS type 2 is used quite often (for instance, in "Subjective Quality Assessment of the Emerging AVC/H.264 Coding Standard"), as are SAMVIQ ("Subjective Quality of Internet Video Codecs", our "Subjective Comparison of Video Codecs") and DSIS.

Q: What is a "task file"?

A: A "task file" is created by the "MSU PVQ - task manager". A task is a collection of video files plus information about how they will be shown to experts. All video files in a task are supposed to be different versions of one video sequence. In some test methods you must choose a "task reference"; it is supposed to be the unimpaired (uncompressed) version of the sequence.

Q: What video formats are supported in this tool?

A: This tool supports the .avi format, including all sorts of compressed video files, and .avs (AviSynth scripts). Warning: to play video files, you must have the appropriate codecs installed!
Through AviSynth you can use many other video formats. For more information, see the AviSynth page.

Q: How does an expert compare videos?

A: The expert runs the "MSU perceptual video quality player", enters his or her name, and opens the task file (you can simplify this; see the following question). When the expert finishes the task, a subfolder named after the task is created in the folder that contains the task file. Results from all experts are stored there; results for a particular expert are stored in the file "expert_name".csv.

Q: What is stored in the file with the results of an expert?

A: Results for an expert are stored in the following format:

task type,[string with type of task]
average framerate,[0|1]                  // is the "average framerate" option enabled
pause allowed,[0|1]
rewind allowed,[0|1]
one to each,[0|1]
number of tests,[number of tests]
number of videos,[number of videos]
reference video,[reference video name]   // present only if the task type is "one to each"
[video name],[mark]                      // meaning of the mark depends on the test methodology
screen resolution,[resolution values]
[video name],[decompressor name]
time of assessment,[time]
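A file in this format can be read with Python's csv module. The sketch below is illustrative only: `parse_expert_results` is a hypothetical helper (not part of the tool), the recognized option names follow the listing above, and actual files may differ between task types.

```python
# Hedged sketch: read an expert's result file in the format listed above.
import csv

# Option fields from the listing; everything else is treated as a video line.
_OPTION_KEYS = {
    "task type", "average framerate", "pause allowed", "rewind allowed",
    "one to each", "number of tests", "number of videos", "reference video",
    "screen resolution", "time of assessment",
}

def parse_expert_results(path):
    options, video_marks = {}, {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 2:
                continue
            key, value = row[0].strip(), row[1].strip()
            if key in _OPTION_KEYS:
                options[key] = value
            else:
                # "[video name],[mark]" lines; "[video name],[decompressor]"
                # lines have the same shape, so keep only numeric marks
                try:
                    video_marks[key] = float(value)
                except ValueError:
                    pass
    return options, video_marks
```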

Q: A "task file" contains different versions of a single video sequence, but usually several different sequences are used in a comparison; how can I handle that?

A: You can create a .tsk file for each sequence that you use in the comparison, then create a .bat file with the following text:

"MSU perceptual video quality player.exe" "c:\tasks\task1.tsk" "c:\tasks\task2.tsk" ...

When an expert runs this file, he or she will be taken through all of these tasks (without having to open each one manually).

Q: OK, the experts have gone through the tasks; how can I compute the average results?

A: Results for all experts are stored in a subfolder named after the task, inside the folder that contains the task file. To compute average results, open the task file and press the "count results" button. The results will be saved in the file "average mark.csv" in the folder with the experts' results.

Q: What is the range of marks?

A: In the results file of a particular expert, the range of marks depends on the test methodology. In the average results, all marks range from 0 to 10; the higher, the better.


Last updated: 12-May-2022

Project updated by
Server Team and MSU Video Group

Project sponsored by YUVsoft Corp.

Project supported by MSU Graphics & Media Lab