
Image-based representation of clothing for virtual try-on

Subject areas: Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Funding: from 2011 to 2016
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 197264547

Year of creation: 2015

Summary of project results

In the previous project, we developed a novel image-based rendering approach to photo-realistically render, synthesize, and animate images of articulated objects such as humans and clothing. The goal of this renewal period was to analyze whether the developed methods can be adapted to the synthesis of non-articulated objects, especially facial expressions.

The basic concept of the developed animation technique is to rely on example images instead of computationally costly mesh deformation and rendering techniques. Our approach exploits the fact that all characteristic details are captured by the images but often lost during shape and reflectance modeling. These details are extracted from the images as mesh-based warps in both the spatial and the photometric domain, so that they can serve as appearance examples to guide a complex animation process. The proposed approach shifts computational complexity from the rendering phase to an a priori training phase; rendering then amounts to warping a set of images (see the sketches following this summary).

The main contributions, achievements, and scientific findings of this renewal project can be summarized as follows:

- We have developed new representations of facial expressions that combine pose-dependent geometry and appearance. Following the concept of pose-space image-based rendering, large-scale motion and rough shape are modeled by a coarse 3D model, while small details are captured by appearance and represented by a large database of images.
- Several pose-space databases of facial expressions with different subjects, varying numbers of camera views, and varying textural information have been set up, based both on existing data and on newly captured data.
- For each dataset, a mapping between pose and characteristic facial appearance has been established, based on a low-dimensional parameterization of pose/expression that captures characteristic appearance properties of facial expressions and spans the domain of all possible facial expressions (the pose space).
- Given new input descriptors, images of new facial expressions can be synthesized by interpolating the examples in the space spanned by the example descriptors using scattered data interpolation methods, as illustrated in the first sketch below.

The applications of this work are manifold. One possible application of the proposed methods is performance-driven facial animation, i.e., the transfer of the facial expressions of one person to another. Another application is the synthesis of photorealistic facial expressions in low-cost real-time applications such as computer games and augmented reality.
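To make the scattered data interpolation step concrete, the following is a minimal sketch, not the project's actual implementation. It assumes a Gaussian radial basis function as the interpolant (one common choice among scattered data interpolation methods) and treats each example as a flattened warp/appearance vector attached to a pose descriptor; the names fit_rbf, synthesize, sigma, and reg are illustrative.

```python
import numpy as np

def gaussian_kernel(a, b, sigma):
    """Pairwise Gaussian RBF kernel between descriptor sets a (N, D) and b (M, D)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(descriptors, examples, sigma=1.0, reg=1e-6):
    """Solve for RBF weights so the interpolant reproduces the training examples.

    descriptors: (N, D) pose/expression descriptors spanning the pose space.
    examples:    (N, P) flattened example warps or appearance images.
    """
    K = gaussian_kernel(descriptors, descriptors, sigma)
    K += reg * np.eye(len(descriptors))      # small regularizer for numerical stability
    return np.linalg.solve(K, examples)      # weights, shape (N, P)

def synthesize(query, descriptors, weights, sigma=1.0):
    """Interpolate a new warp/appearance vector for an unseen pose descriptor."""
    k = gaussian_kernel(query[None, :], descriptors, sigma)  # (1, N)
    return (k @ weights)[0]                                  # (P,)
```

At query time the interpolated vector would be reshaped back into a warp field or image; the project's mesh-based pose-space rendering is richer than this dense-vector blend, which serves only as a simplified stand-in.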
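The statement that rendering amounts to warping a set of images can likewise be sketched as generic backward warping with a dense displacement field. This is an assumption-laden illustration, not the project's mesh-based warp; the (H, W, 2) flow layout and the helper name backward_warp are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def backward_warp(image, flow, order=1):
    """Warp a grayscale image by a dense per-pixel displacement field.

    image: (H, W) array.
    flow:  (H, W, 2) displacements (dy, dx); each output pixel samples the
           input image at its own location plus the displacement.
    """
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys + flow[..., 0], xs + flow[..., 1]])  # (2, H, W)
    return map_coordinates(image, coords, order=order, mode="nearest")
```

In a full pipeline, a vector returned by synthesize above would be reshaped into such a displacement field (plus a photometric correction) before warping the example images, so that run-time cost is dominated by image warps rather than by mesh deformation and rendering.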
