Project Details

JND-based perceptual video quality analysis and modeling

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term since 2022
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 496858717
 
Objective video quality metrics with a high correlation to perceptual quality are needed to optimize video encoding and streaming applications at near-lossless quality. To enable fine-grained quality assessment, the just-noticeable-difference (JND) methodology has been proposed. In psychophysics, the JND refers to the smallest perceptible difference between an initial and a secondary level of a given sensory stimulus. In this project, we will generate large-scale JND-based video quality datasets with videos encoded using the latest video coding standards, such as AV1 and H.266/VVC, in addition to the legacy H.264/AVC codec, and will develop models for objective prediction of the JND. We will also generate JND-based image quality datasets in which the images are encoded with the same codecs operating in intra-frame coding mode. In addition to conducting laboratory experiments on subjective perception to estimate the JND, we will evaluate the feasibility of replacing time-consuming and expensive human-centered subjective testing in the laboratory with a scalable, faster, and less expensive crowdsourcing alternative.

The objectives of this project are as follows: (1) Generation of a JND-based video quality dataset in laboratory environments at resolutions of 3840x2160 and 1920x1080 pixels, plus cropped versions at 640x480 pixels, using the H.264/AVC, AV1, and H.266/VVC video codecs; we will also generate a JND-based image quality dataset from images compressed with the intra-frame coding mode of these codecs. (2) Evaluation of distortion boosting techniques, e.g., flickering between the reference and the distorted stimulus, to increase perceptual sensitivity in JND assessment. We estimate the psychometric function underlying the subjects' responses to the distorted stimuli and calculate the proportion of subjects (e.g., 0.75) who cannot perceive distortion in the compressed video at a given distortion index (e.g., quantization parameter (QP), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), or bitrate); this proportion is referred to as the satisfied user ratio (SUR). (3) Exploration of adaptive psychometric sampling methods to estimate the JND threshold more accurately and reliably. (4) Development of methods for reliable subjective quality assessment of large-scale visual media through crowdsourcing. (5) Generation of large-scale JND-based image and video quality datasets using crowdsourcing. (6) Development of objective JND/SUR estimation methods, especially based on deep learning approaches. (7) Contribution to the standardization of JND-based subjective video quality assessment through crowdsourcing. The outcomes of this project will set the stage for researchers to develop and benchmark better deep learning models for JND-based image and video quality assessment.
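The SUR computation sketched in objective (2) can be illustrated with a minimal plain-Python example; the function name and the response data below are hypothetical stand-ins, not part of the project:

```python
# A minimal sketch (hypothetical data): estimate the satisfied user
# ratio (SUR) threshold by linearly interpolating the empirical
# psychometric curve at the target satisfaction level (e.g., 0.75).

def sur_threshold(qps, p_satisfied, target=0.75):
    """Return the distortion index (here: QP) at which the fraction of
    subjects who perceive no difference drops to `target`.
    Assumes `p_satisfied` decreases monotonically as `qps` increases."""
    for i in range(len(qps) - 1):
        q0, q1 = qps[i], qps[i + 1]
        p0, p1 = p_satisfied[i], p_satisfied[i + 1]
        if p0 >= target >= p1:
            # Linear interpolation between the two bracketing QP levels.
            return q0 + (p0 - target) * (q1 - q0) / (p0 - p1)
    raise ValueError("target satisfaction level not bracketed by the data")

# Hypothetical lab responses: fraction of subjects reporting
# "no visible difference" versus the reference at each QP level.
qps = [20, 24, 28, 32, 36, 40]
p_satisfied = [0.98, 0.95, 0.85, 0.60, 0.30, 0.10]

# QP at which 75% of subjects are still satisfied.
print(sur_threshold(qps, p_satisfied))
```

In practice the project fits a parametric psychometric function (e.g., a logistic curve) to the responses rather than interpolating raw proportions; the interpolation above only illustrates how the SUR threshold is read off the curve.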
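Adaptive psychometric sampling, as in objective (3), reduces the number of subjective trials needed to localize the JND. One simple instance is a bisection search over QP levels; the oracle function below is a hypothetical stand-in for querying a human subject:

```python
# A minimal sketch (hypothetical setup): bisection search over QP for
# the first JND point, i.e. the lowest QP at which a subject reports a
# visible difference from the reference.

def subject_notices(qp, hidden_jnd=31):
    """Hypothetical oracle: the subject notices distortion at or above a
    hidden JND threshold (a real study queries a human instead)."""
    return qp >= hidden_jnd

def find_jnd(qp_lo=0, qp_hi=51):
    """Bisection over the QP range [qp_lo, qp_hi] (H.264/AVC uses 0-51).
    Each trial halves the search interval, so about log2 of the range
    in comparisons suffices instead of testing every QP level."""
    while qp_lo < qp_hi:
        mid = (qp_lo + qp_hi) // 2
        if subject_notices(mid):
            qp_hi = mid       # difference visible: JND is at or below mid
        else:
            qp_lo = mid + 1   # invisible: JND is above mid
    return qp_lo

print(find_jnd())  # → 31 with the hypothetical oracle above
```

Real adaptive procedures (e.g., staircase methods) must also cope with noisy, inconsistent subject responses, which plain bisection does not; this sketch only conveys why adaptive sampling needs far fewer trials than an exhaustive sweep.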
DFG Programme Research Grants
International Connection China, Norway, United Kingdom, USA
 
 
