Project Details
Fine-grained visual quality assessment and modeling for high-fidelity compressed images
Applicant
Dr. Mohsen Jenadeleh
Subject Area
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
since 2022
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 496858717
Recent advances in image compression, storage, and display technologies have enabled the widespread use of high-quality visual content in consumer and professional workflows. Codecs such as JPEG XL, AVIF, and learning-based methods like JPEG AI aim to maintain high visual fidelity, where preserving subtle image details is essential. This increases the need for accurate quantification of fine-grained quality differences, particularly near or below the just-noticeable difference (JND) threshold. This requirement is especially critical for learning-based models that use quality metrics as objective functions during training. However, reliably quantifying such subtle differences remains a key challenge. Objective image quality assessment (IQA) metrics are typically calibrated and validated using data obtained from subjective experiments with coarse distortion levels, such as absolute category rating (ACR) on ordinal scales. While these methods are effective for detecting prominent distortions, they lack the sensitivity and granularity necessary to accurately estimate subtle quality differences. Moreover, emerging learning-based compression methods introduce novel artifacts that differ from those produced by conventional codecs. These artifacts are often underrepresented in current IQA datasets and may not be reliably quantified by existing objective metrics. To address these limitations, a few datasets have been constructed using pairwise or triplet comparisons. However, such datasets are limited in scale and codec diversity, and existing models often fail to fit empirical response distributions. In this project, we will develop a more reliable subjective model for triplet comparison data by hierarchically modeling observer, stimulus, and codec variability, and by incorporating indecision responses and lapse errors. We will evaluate sampling strategies to reduce annotation costs while ensuring precise estimation of fine-grained impairments on the JND scale.
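As a minimal illustration of the kind of response model described above (a sketch, not the project's actual hierarchical model), a Thurstone-style ternary triplet response with an indecision band and a lapse mixture could be written as follows; the function name `triplet_probs` and the parameters `sigma`, `tau`, and `lam` are illustrative assumptions:

```python
import numpy as np
from math import erf

def triplet_probs(dA, dB, sigma=1.0, tau=0.5, lam=0.02):
    """Illustrative ternary triplet response probabilities.

    dA, dB: perceptual distances of stimuli A and B from the reference.
    sigma:  standard deviation of internal decision noise.
    tau:    half-width of the indecision band ("not sure" region).
    lam:    lapse rate, mixed in as uniform guessing over the 3 responses.

    The decision variable is delta ~ N(dB - dA, sigma^2); the observer
    responds "A is closer" if delta > tau, "B is closer" if delta < -tau,
    and "not sure" otherwise.
    """
    mu = dB - dA
    Phi = lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0)))  # standard normal CDF
    pA = 1.0 - Phi((tau - mu) / sigma)   # P(delta > tau)
    pB = Phi((-tau - mu) / sigma)        # P(delta < -tau)
    pU = 1.0 - pA - pB                   # indecision
    p = np.array([pA, pB, pU])
    return lam / 3.0 + (1.0 - lam) * p   # lapse mixture

def triplet_nll(counts, dA, dB, **kw):
    """Negative log-likelihood of observed response counts [nA, nB, nU]."""
    p = triplet_probs(dA, dB, **kw)
    return -np.sum(np.asarray(counts, dtype=float) * np.log(p))
```

In a full model of the kind the project proposes, `sigma`, `tau`, and `lam` would vary hierarchically across observers, stimuli, and codecs rather than being fixed constants, and the likelihood would be maximized (or sampled) jointly over all triplets.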
We will create a large-scale, diverse crowdsourced IQA dataset of images encoded with both conventional and learning-based compression methods to support benchmarking and the development of objective approaches, particularly deep learning models, for fine-grained IQA. We will investigate the replicability of high dynamic range (HDR) IQA results obtained in fully controlled laboratory environments with calibrated HDR displays by comparing them with results from partially controlled settings using HDR-capable displays and from crowdsourced experiments with tone-mapped content. We will also assess whether multidimensional representations of image quality explain the observed triplet responses better than single-dimensional models. The outcomes of this project will contribute to international standardization efforts, particularly JPEG AIC-4, to advance and benchmark objective IQA metrics for fine-grained quality assessment.
DFG Programme
Research Grants
International Connection
Japan, Portugal, Switzerland, United Kingdom
