MSU Quality Measurement Tool: Metrics information
MSU Graphics & Media Lab (Video Group)
Metrics Info
 PSNR
 MSAD
 Delta
 MSU Blurring Metric
 MSU Blocking Metric
 SSIM
 Multi-Scale SSIM
 3-Component SSIM
 Spatio-Temporal SSIM
 VQM
 MSE
 MSU Brightness Flicking Metric
 MSU Brightness Independent PSNR
 MSU Drop Frame Metric
 MSU Noise Estimation Metric
 MSU Scene Change Detector
PSNR
This metric, frequently used in practice, is the peak signal-to-noise ratio (PSNR):

PSNR = 10 * log10( MaxErr^2 * w * h / SUM[i,j] (x(i,j) - y(i,j))^2 )

where MaxErr is the maximum possible absolute value of the color component difference, w is the video width, h is the video height, and the sum runs over all pixels of the frame. Generally, this metric is equivalent to mean squared error, but it is more convenient to use because of its logarithmic scale. It has the same disadvantages as the MSE metric.
In MSU VQMT you can calculate PSNR for all YUV and RGB components and for the L component of the LUV color space.
In MSU VQMT there are four PSNR implementations. "PSNR" and "APSNR" use the correct way of calculating PSNR and take the maximum possible absolute value of the color difference as MaxErr. However, this approach gives an unpleasant effect after color depth conversion: if the color depth is simply increased from 8 to 16 bits, the "PSNR" and "APSNR" values will change, because MaxErr changes with the maximum possible absolute value of the color difference (255 for 8-bit components and 255 + 255/256 for 16-bit components). For this reason "PSNR (256)" and "APSNR (256)" are also implemented. Their values do not change across bit depths, because they use the upper boundary of the color difference, 256, as MaxErr. This approach is less correct, but it is often used because it is fast.
Here are the rules for choosing MaxErr:
 "PSNR" and "APSNR": MaxErr depends on the bit depth of the color components:
  255 for 8-bit components
  255 + 3/4 for 10-bit components
  255 + 63/64 for 14-bit components
  255 + 255/256 for 16-bit components
  100 for the L component of the LUV color space
  If the bit depths of the two compared videos differ, the greater bit depth is used to select MaxErr.
  All color space conversions are assumed to produce 8-bit images. This means that if, for example, you are measuring R-RGB PSNR for a 14-bit YUV file, 255 will be taken as MaxErr.
 "PSNR (256)" and "APSNR (256)": MaxErr is selected according to the following rules:
  256 for the YUV and RGB color spaces
  100 for the L component of the LUV color space
"PSNR" and "PSNR (256)" compute the average PSNR from the error accumulated over the whole sequence. Sometimes, however, a simple average of all the per-frame PSNR values is needed: "APSNR" and "APSNR (256)" are implemented for this case and calculate the average PSNR by simply averaging the per-frame PSNR values.
The next table summarizes the differences:

 Metric      | MaxErr              | Sequence averaging
 PSNR        | correct             | correct
 PSNR (256)  | 256 (fast, inexact) | correct
 APSNR       | correct             | per-frame averaging
 APSNR (256) | 256 (fast, inexact) | per-frame averaging
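The difference between the two averaging schemes can be sketched as follows (a minimal sketch, assuming single-component frames stored as NumPy arrays; the function names are illustrative, not VQMT's):

```python
import numpy as np

def psnr(frames_src, frames_dst, max_err=255.0):
    """Sequence PSNR: logarithm of the MSE accumulated over the whole
    sequence (the "correct" averaging used by "PSNR")."""
    diff = np.asarray(frames_src, dtype=np.float64) - np.asarray(frames_dst, dtype=np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_err ** 2 / mse)

def apsnr(frames_src, frames_dst, max_err=255.0):
    """Average PSNR: simple mean of the per-frame PSNR values ("APSNR")."""
    values = [psnr(s, d, max_err) for s, d in zip(frames_src, frames_dst)]
    return sum(values) / len(values)
```

Because the logarithm is concave, APSNR is never smaller than PSNR on the same data; sequences with uneven per-frame quality show the largest gap between the two.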
Example visualization: source frame, processed frame, Y-YUV PSNR.
MSAD
The value of this metric is the mean absolute difference of the color components at the corresponding points of the images. This metric is used for testing codecs and filters.
Example visualization: source frame, processed frame, MSAD.
Delta
The value of this metric is the mean difference (without taking the absolute value) of the color components at the corresponding points of the images. This metric is used for testing codecs and filters.
Example visualization: source frame, processed frame, Delta.
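MSAD and Delta differ only in whether the absolute value is taken; a minimal sketch (the sign convention for Delta, source minus processed, is an assumption here):

```python
import numpy as np

def msad(src, dst):
    """Mean of absolute per-pixel differences (MSAD)."""
    return np.mean(np.abs(np.asarray(src, dtype=float) - np.asarray(dst, dtype=float)))

def delta(src, dst):
    """Mean of signed per-pixel differences (Delta); the sign reveals
    whether the processed image is brighter or darker overall."""
    return np.mean(np.asarray(src, dtype=float) - np.asarray(dst, dtype=float))
```

For example, if the processed image is brightened in one half and darkened equally in the other, Delta is 0 while MSAD still reports the distortion.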
MSU Blurring Metric
This metric allows you to compare the amount of blurring in two images. If the metric value for the first picture is greater than for the second, the second picture is more blurred than the first one.
Example visualization: source frame, processed frame, MSU Blurring Metric.
MSU Blocking Metric
This metric was created to measure the subjective blocking effect in a video sequence. In high-contrast areas of the frame, blocking is not noticeable, but in smooth areas these block edges are conspicuous. The metric also contains a heuristic method for detecting object edges that happen to lie on block boundaries; in that case the metric value is reduced, which allows blocking to be measured more precisely. Information from previous frames is used to achieve better accuracy.
Example visualization: source frame, MSU Blocking Metric.
SSIM Index
SSIM Index is based on measuring three components (luminance similarity, contrast similarity and structural similarity) and combining them into a single result value.
Original paper
Example visualization: original frame, compressed frame, SSIM (fast), SSIM (precise).
There are two implementations of SSIM in our program: fast and precise. The fast one is equal to our previous SSIM implementation. The difference is that the fast one uses a box filter, while the precise one uses Gaussian blur.
Notes:
 The visualization of the fast implementation appears shifted. This effect originates from the sum calculation algorithm for the box filter: the sum is calculated over the block to the bottom-left or top-left of the pixel (depending on whether the image is bottom-up or top-down).
 The SSIM metric has two coefficients. They depend on the maximum value of the image color component and are
calculated using the following equations:
  C1 = 0.01 * 0.01 * video1Max * video2Max
  C2 = 0.03 * 0.03 * video1Max * video2Max
  videoMax = 255 for 8-bit color components
  videoMax = 255 + 3/4 for 10-bit color components
  videoMax = 255 + 63/64 for 14-bit color components
  videoMax = 255 + 255/256 for 16-bit color components
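With the constants defined above, the standard per-window SSIM formula (from the original paper) can be sketched as follows; the box/Gaussian windowing and whole-frame aggregation that VQMT performs are omitted, so this is a sketch, not VQMT's implementation:

```python
import numpy as np

def ssim_window(x, y, video1_max=255.0, video2_max=255.0):
    """SSIM for a single window, with the stabilizing constants C1 and C2
    computed from the maximum color component values as described above."""
    c1 = 0.01 * 0.01 * video1_max * video2_max
    c2 = 0.03 * 0.03 * video1_max * video2_max
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()          # luminance terms
    vx, vy = x.var(), y.var()            # contrast terms
    cov = ((x - mx) * (y - my)).mean()   # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical windows score exactly 1.0; the constants keep the ratio stable when means or variances are near zero.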
Multi-Scale SSIM Index
Multi-Scale SSIM Index is based on the SSIM metric computed at several downscaled levels of the original images. The result is a weighted combination of those per-level values.
Original paper
Brighter areas correspond to greater difference.
Example visualization: original frame, compressed frame, MS-SSIM (fast), MS-SSIM (precise).
As for the SSIM metric, two implementations of Multi-Scale SSIM are available: fast and precise. The difference is that the fast one uses a box filter, while the precise one uses Gaussian blur.
Notes:
 Because the resulting metric is calculated as a product of several values below 1.0, the visualization appears dark. The visualization of the fast implementation also appears shifted; this effect originates from the sum calculation algorithm for the box filter: the sum is calculated over the block to the bottom-left or top-left of the pixel (depending on whether the image is bottom-up or top-down).
 Level weights (0 corresponds to the original frame, while 4 corresponds to the most downscaled level):
 WEIGHTS[0] = 0.0448;
 WEIGHTS[1] = 0.2856;
 WEIGHTS[2] = 0.3001;
 WEIGHTS[3] = 0.2363;
 WEIGHTS[4] = 0.1333;
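The original Multi-Scale SSIM paper combines the per-level values as an exponent-weighted product, which is consistent with the note above that the result is a multiplication of values below 1.0. A minimal sketch with the weights listed above, simplified to treat all levels uniformly (the paper applies the luminance term only at the coarsest level):

```python
# Per-level weights from the list above (level 0 = original frame).
WEIGHTS = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]

def ms_ssim_combine(per_level_values):
    """Combine per-level SSIM values into one MS-SSIM score by raising
    each value to its weight and multiplying the results."""
    assert len(per_level_values) == len(WEIGHTS)
    result = 1.0
    for value, weight in zip(per_level_values, WEIGHTS):
        result *= value ** weight
    return result
```

Since every factor is at most 1.0, the combined score is dragged down by whichever scale shows the worst similarity.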
3-Component SSIM Index
3-Component SSIM Index is based on dividing the source frames into regions. There are three types of regions: edges, textures and smooth regions. The resulting metric is calculated as a weighted average of the SSIM metric over those regions. The human eye perceives differences in textured or edge regions more precisely than in smooth regions. The division is based on the gradient magnitude at every pixel of the images.
Original paper
Brighter areas correspond to greater difference.
Example visualization: original frame, compressed frame, 3-SSIM region division, 3-SSIM metric.
Spatio-Temporal SSIM
The idea of this algorithm is to use motion-oriented weighted windows for the SSIM Index. The MSU Motion Estimation algorithm is used to retrieve the motion information. Based on the ME results, a weighting window is constructed for every pixel. This window can use up to 33 consecutive frames (16 previous + current frame + 16 following). The SSIM Index is then calculated for every window, so that temporal distortions are taken into account as well.
This implementation also uses a different pooling technique: only the lowest 6% of the metric values in a frame are used to calculate the frame metric value. This produces a larger difference in metric values between different files.
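This pooling step can be sketched as follows (a minimal sketch; how VQMT rounds the 6% sample count is an assumption):

```python
import numpy as np

def pool_lowest_fraction(per_pixel_values, fraction=0.06):
    """Compute a frame score from only the lowest `fraction` of per-pixel
    metric values, so the worst-quality regions dominate the score."""
    values = np.sort(np.asarray(per_pixel_values, dtype=float).ravel())
    count = max(1, int(round(len(values) * fraction)))
    return values[:count].mean()
```

Compared with a plain mean, this pooling spreads out the scores of files whose distortions are concentrated in small regions.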
Original paper
Source
Compressed
Metric visualization
VQM
VQM uses a DCT-based transform to account for human visual perception.
Original paper
Example visualization: source frame, processed frame, VQM.
MSE
The value of this metric is the mean squared difference of the color components at the corresponding points of the images.
Example visualization: source frame, processed frame, Y-YUV MSE.
Information About YUV <=> RGB Tables
REC.601
This is the default YUV <=> RGB table in AVISynth.
{R [0...255], G [0...255], B [0...255]} => {Y [16...235], U [16...240], V [16...240]}
RGB to YUV:
Y =  (0.257 * R) + (0.504 * G) + (0.098 * B) + 16
U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128
V =  (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
YUV to RGB:
R = 1.164 * (Y - 16) + 1.596 * (V - 128)
G = 1.164 * (Y - 16) - 0.391 * (U - 128) - 0.813 * (V - 128)
B = 1.164 * (Y - 16) + 2.018 * (U - 128)
PC.601
{R [0...255], G [0...255], B [0...255]} => {Y [0...255], U [-128...128], V [-128...128]}
RGB to YUV:
Y =  0.299 * R + 0.587 * G + 0.114 * B
U = -0.147 * R - 0.289 * G + 0.436 * B
V =  0.615 * R - 0.515 * G - 0.100 * B
YUV to RGB:
R = Y + 1.140 * V
G = Y - 0.395 * U - 0.581 * V
B = Y + 2.032 * U
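The REC.601 table can be sketched in code as follows (function names are illustrative; the small round-trip error comes from the rounded coefficients in the table):

```python
def rgb_to_yuv_rec601(r, g, b):
    """REC.601 (studio-swing) RGB -> YUV, using the coefficients above."""
    y = 0.257 * r + 0.504 * g + 0.098 * b + 16
    u = -0.148 * r - 0.291 * g + 0.439 * b + 128
    v = 0.439 * r - 0.368 * g - 0.071 * b + 128
    return y, u, v

def yuv_to_rgb_rec601(y, u, v):
    """REC.601 YUV -> RGB, using the coefficients above."""
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.391 * (u - 128) - 0.813 * (v - 128)
    b = 1.164 * (y - 16) + 2.018 * (u - 128)
    return r, g, b
```

Note that pure white (255, 255, 255) maps to Y slightly above the nominal 235 ceiling and U = V = 128, illustrating the studio-swing range of the table.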
YUV Files
YUV files form a variety of "raw data" files. MSU Video Quality Measurement Tool now supports several types of them, but if you use .yuv files in your comparison, note that:
 U and V values in YUV files are assumed to be positive.
 If your program used any particular YUV <=> RGB table when creating YUV files from AVI (or AVI from YUV), you must choose the same table in the settings of MSU Video Quality Measurement Tool.
MSU Video Quality Measurement Tools
 MSU Video Quality Measurement Tool (program for objective comparison)
 MSU Human Perceptual Quality Metric (several metrics for exact visual tests)
Last updated: 25 August 2011

Project updated by
Server Team and MSU Video Group
Project sponsored by YUVsoft Corp.
Project supported by MSU Graphics & Media Lab