
Perceptually Optimal Reproduction of Color Images considering Device Limits

Applicant: Dr. Philipp Urban
Subject Area: Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Funding: 2008 to 2021
Project Identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 83740676
 
Year of Creation: 2014

Summary of Project Results

We developed a model to improve the prediction performance of existing color-difference formulas by utilizing additional visual data. The model is based on a Gaussian process with the color-difference formula as mean function and uses Gaussian process regression to predict unknown color differences while accounting for the uncertainty of the visual data. The approach significantly improves the prediction results of existing color-difference formulas on single visual datasets. Furthermore, it reveals inconsistencies between visual datasets that were combined for fitting the parameters of existing color-difference formulas; these inconsistencies prevent existing formulas from performing better.

We investigated whether narrowband display stimuli may replace broadband surface-color stimuli in color-difference experiments, which would allow a more convenient collection of visual data. We could not reject the hypothesis that average perceived color differences obtained in both ways agree with each other. However, we found much higher interobserver variability for color differences judged on the display. An additional experiment indicated that this might be caused by cone-fundamental variations within our observer panel.

To enhance image-appearance models, we developed a Euclidean working color space with minimal isometric disagreement to color-difference formulas and very low cross-contamination between color attributes (lightness, chroma, and hue), i.e., changing a predicted attribute does not affect the other perceived attributes. The performance of the new color space in predicting available visual data is not significantly different from that of the color-difference formulas used to compute the space. This is unexpected, because the space was developed to possess additional global properties (absence of cross-contamination).

We developed a framework to measure the perceived difference between two images.
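The formula-upgrading model from the first paragraph can be illustrated as Gaussian-process regression with the color-difference formula as mean function. This is a minimal sketch, not the project's implementation; the squared-exponential kernel and all names are our assumptions:

```python
import numpy as np

def rbf_kernel(X1, X2, length=5.0, amp=1.0):
    # Squared-exponential covariance between rows of X1 and X2.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return amp * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X, y, sigma, X_star, formula):
    # GP regression with the color-difference formula as mean function;
    # sigma holds the per-pair uncertainty of the visual data.
    K = rbf_kernel(X, X) + np.diag(sigma ** 2)   # noisy training covariance
    K_star = rbf_kernel(X_star, X)               # test/train covariance
    residual = y - formula(X)                    # visual data minus formula prediction
    alpha = np.linalg.solve(K, residual)
    return formula(X_star) + K_star @ alpha      # corrected prediction
```

Far from the visual data the RBF covariance vanishes, so the prediction reverts to the plain formula; near the data it is pulled toward the observations, weighted by their uncertainty.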
The images are normalized with respect to the viewing conditions by an image-appearance model and transformed into the working color space developed in the previous subproject. From the resulting images, image-difference features (IDFs) are extracted and combined to predict the perceived image difference. A factorial combination of five IDFs comprising achromatic and chromatic information achieves a significant improvement of 10% over the best-performing state-of-the-art image-difference measure (IDM) on a large visual dataset comprising gamut-mapping distortions.

We developed an iterative method to optimize gamut mapping by minimizing this IDM. Results of a visual experiment revealed that the optimized images agree more closely with the originals than the starting images of the iteration, but they exhibit distinct artifacts. We modified the IDM to address these artifacts (particularly by adding two IDFs) with the aim of producing artifact-free optimization results. The resulting IDM allows artifact-free gamut-mapping optimization that retains the contrast, structure, and particularly the color of the original image to a great extent. The IDM-based gamut-mapping optimization significantly outperforms a state-of-the-art spatial gamut-mapping algorithm. Interestingly, a multiscale version of the IDM shows the highest correlation with the largest visual database (TID2013), which contains conventional distortions (noise, blur, compression artifacts), without dedicated training on these distortions.

We adapted the concept of the recently proposed HDR color spaces to enhance the normalization step of the IDM to account for HDR images. This makes it possible to perform tone and gamut mapping simultaneously by minimizing the IDM. A psychophysical experiment on an HDR display showed that such HDR gamut mapping outperforms the conventional workflow of subsequent HDR tone and gamut mapping, particularly for small gamut sizes.
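The iterative IDM-minimizing optimization can be sketched as projected gradient descent: each step reduces the image difference to the original, then projects back into the target gamut. The toy difference measure, the box-shaped gamut, and all names below are our simplifications, not the project's IDFs:

```python
import numpy as np

def toy_idm(reference, test):
    # Toy image-difference measure: a pixelwise term plus a global lightness
    # term (crude stand-ins for the real achromatic/chromatic IDFs).
    return np.mean((reference - test) ** 2) + (reference.mean() - test.mean()) ** 2

def idm_gradient(reference, test):
    # Analytic gradient of toy_idm with respect to the test image.
    n = test.size
    return -2.0 * (reference - test) / n - 2.0 * (reference.mean() - test.mean()) / n

def optimize_gamut_mapping(original, start, lo, hi, steps=500, lr=0.5):
    # Projected gradient descent: take a step that reduces the IDM,
    # then clip back into the (box-shaped) toy gamut.
    image = start.copy()
    for _ in range(steps):
        image = np.clip(image - lr * idm_gradient(original, image), lo, hi)
    return image
```

The real method differs in the measure being minimized, but the structure is the same: the IDM supplies the objective, and the gamut constraint is enforced after every update.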
Spectral gamut mapping is one of the most important modules of the spectral reproduction workflow, because most reflectances are physically not reproducible by a given printing system. To achieve a reproduction that is as visually correct as a colorimetric reproduction under one illuminant and superior under a set of other illuminants, the metamer-mismatch-based spectral gamut-mapping framework was proposed in previous work. We modified the framework by replacing the metamer-mismatch gamuts with paramer-mismatch gamuts, allowing non-noticeable color changes of previous mappings for the benefit of increased spectral and colorimetric variability for subsequent mappings. Such paramer-mismatch-based spectral gamut mapping improves the reproduction under less important illuminants without adverse color shifts under important illuminants.

For spectral image reconstruction, we proposed a spatio-spectral Wiener method that preserves edges. The method was derived by Bayesian inference and shows improved prediction performance over spectral Wiener and spatio-spectral Wiener estimation on multiple test images and various noise levels.
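As a minimal sketch of the Wiener baseline that the edge-preserving spatio-spectral method extends: reflectance spectra are reconstructed from camera responses by linear MMSE estimation with a spectral prior. The first-order Markov prior and all names here are our assumptions, not the project's implementation:

```python
import numpy as np

def markov_covariance(bands, rho=0.97):
    # First-order Markov prior: neighboring wavelengths are highly correlated.
    idx = np.arange(bands)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def spectral_wiener(responses, M, C, noise_var):
    # Linear MMSE (Wiener) reconstruction of reflectance spectra r from
    # camera responses c = M r + noise, with a-priori spectral covariance C.
    channels = M.shape[0]
    W = C @ M.T @ np.linalg.inv(M @ C @ M.T + noise_var * np.eye(channels))
    return responses @ W.T   # one reconstructed spectrum per row
```

The spatio-spectral variant additionally exploits correlations between neighboring pixels, and the project's edge-preserving version restricts that spatial prior so that it does not smooth across edges.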

Project-Related Publications (Selection)

  • Spectral Image Reconstruction using an Edge Preserving Spatio-Spectral Wiener Estimation. Journal of the Optical Society of America A, Vol. 26, Issue 8, pp. 1868-1878 (2009)
    Philipp Urban, Mitchell R. Rosen and Roy S. Berns
  • Upgrading Color-Difference Formulas. Journal of the Optical Society of America A, Vol. 27, Issue 7, pp. 1620-1629 (2010)
    Ingmar Lissner and Philipp Urban
  • Analyzing small suprathreshold differences of LCD-generated colors. Journal of the Optical Society of America A, Vol. 28, Issue 7, pp. 1500-1512 (2011)
    Philipp Urban, Maria Fedutina and Ingmar Lissner
  • Paramer Mismatch-based Spectral Gamut Mapping. IEEE Transactions on Image Processing, Vol. 20, Issue 6, pp. 1599-1610 (2011)
    Philipp Urban and Roy S. Berns
  • Toward a Unified Color Space for Perception-Based Image Processing. IEEE Transactions on Image Processing, Vol. 21, Issue 3, pp. 1153-1168 (2012)
    Ingmar Lissner and Philipp Urban
  • Image-Difference Prediction: From Grayscale to Color. IEEE Transactions on Image Processing, Vol. 22, Issue 2, pp. 435-446 (2013)
Ingmar Lissner, Jens Preiss, Philipp Urban, Matthias Scheller Lichtenauer and Peter Zolliker
  • Color-Image Quality Assessment: From Prediction to Optimization. IEEE Transactions on Image Processing, Vol. 23, Issue 3, pp. 1366-1378 (2014)
    Jens Preiss, Felipe Fernandes and Philipp Urban
(See online at https://doi.org/10.1109/TIP.2014.2302684)
  • Image-Difference Prediction: From Color to Spectral. IEEE Transactions on Image Processing, Vol. 23, Issue 5, pp. 2058-2068 (2014)
    Steven Le Moan and Philipp Urban
(See online at https://doi.org/10.1109/TIP.2014.2311373)
 
 
