
Realtime Acquisition and Dynamic Modeling of Human Faces, Upper-Bodies, and Hands (D-A-C-H/LAV)

Co-Applicant Professor Dr. Mark Pauly
Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Funding from 2010 to 2014
Project Identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 171789723
 
The 3D acquisition and reconstruction of dynamic objects has recently become a prominent area of research in computer science. A particularly challenging problem is the accurate digitization of humans in motion. The high complexity of human geometry and motion dynamics, and the high sensitivity of our visual system to variations and subtleties in human faces and bodies, place a high burden on the accuracy and geometric consistency of the acquired geometric data and the reconstructed shape models. To mitigate these difficulties, most existing systems integrate user- or technology-assisted components, such as manual selection of feature correspondences, invasive active illumination for sensing, or physical markers attached to the scanned subject. While substantially simplifying the reconstruction process, these system components severely limit the applicability of the 3D scanners, often requiring trained actors and custom-built hardware in costly studio setups. In addition, time-intensive offline computations are typically needed for reconstruction, often on the order of hours for a few seconds of recorded performance.

Our goal is to avoid these restrictions and address the significantly more challenging problem of highly accurate and fully automatic 3D reconstruction of performing humans in realtime, based on a novel markerless and non-invasive acquisition system. To achieve this goal we will advance the state of the art both on the acquisition side and on the modeling and processing side, focusing on the reconstruction of the human face, upper body, and hands in a front-view desktop acquisition setting. We will design and build a novel 3D scanning system that makes use of recent technology advances in high-speed, high-resolution video cameras and 3D depth cameras based on time-of-flight sensing. New algorithms for integrating these two modalities will be developed, as well as novel geometry processing tools to filter the resulting 3D sample sets. The proposed acquisition system will consist of off-the-shelf hardware components that can be readily assembled and deployed in different application scenarios.

Achieving realtime performance for 3D reconstruction imposes strong constraints on processing efficiency. We will address this challenge by shifting complexity from online computation to offline preprocessing. Building on our extensive experience in physics-based modeling and dynamic acquisition, we will design and implement a sophisticated dynamic motion model of the human face, upper body, and hands that is tailored and customized using a large database of pre-recorded human performances. This dynamic model will serve as a geometry and motion prior for the realtime reconstruction of arbitrary subjects. We will explore a new concept of a motion phase space to significantly improve motion prediction for the accurate reconstruction of fast motions. Model reduction techniques and parallel processing methods will be investigated to maximize computational performance and obtain a scalable system that adapts to the available computational resources.

An important aspect of the proposed methodology is a systematic quantitative evaluation of our system. We will employ state-of-the-art marker-based motion capture technology available at the participating institutions to evaluate our reconstruction results and provide quantitative measurements of geometric accuracy and tracking precision.
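As a rough, purely illustrative sketch of the kind of modality integration described above (not the project's actual pipeline), the Python snippet below back-projects a time-of-flight depth map into a 3D point cloud and samples per-point color from a registered high-resolution video frame. All camera parameters, function names, and the synthetic input data are hypothetical placeholders.

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[depth.reshape(-1) > 0]                # drop invalid (zero-depth) samples

def sample_colors(points, image, K_rgb, R, t):
    """Project depth-camera points into the color camera and fetch RGB values."""
    cam = points @ R.T + t                           # depth-camera frame -> color-camera frame
    uv = cam @ K_rgb.T
    uv = uv[:, :2] / uv[:, 2:3]                      # perspective division
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[v, u]

# Example with synthetic data standing in for a ToF frame and a video frame.
depth = np.full((240, 320), 0.8)                     # flat surface 0.8 m from the sensor
rgb   = np.zeros((1080, 1920, 3), dtype=np.uint8)    # placeholder high-resolution frame
K_rgb = np.array([[1800.0, 0.0, 960.0],
                  [0.0, 1800.0, 540.0],
                  [0.0, 0.0, 1.0]])
points = depth_to_points(depth, fx=280.0, fy=280.0, cx=160.0, cy=120.0)
colors = sample_colors(points, rgb, K_rgb, R=np.eye(3), t=np.zeros(3))

In a real system the two sensors would of course have to be calibrated against each other; here the rigid transform (R, t) simply stands in for that calibration.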
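Similarly, the quantitative evaluation against marker-based motion capture can, in its simplest form, be reduced to a per-frame error statistic between ground-truth marker trajectories and the corresponding reconstructed points. The sketch below assumes known correspondences and uses synthetic data; it illustrates the kind of metric involved, not the project's actual evaluation protocol.

import numpy as np

def rms_tracking_error(reconstructed, markers):
    """Both arrays have shape (frames, markers, 3), in millimeters."""
    per_point = np.linalg.norm(reconstructed - markers, axis=-1)   # (frames, markers)
    per_frame = np.sqrt(np.mean(per_point ** 2, axis=1))           # RMS error per frame
    overall = float(np.sqrt(np.mean(per_point ** 2)))              # overall RMS error
    return per_frame, overall

# Synthetic stand-in: 100 frames, 20 markers, reconstruction off by ~1 mm of noise.
rng = np.random.default_rng(0)
markers = rng.uniform(-200.0, 200.0, size=(100, 20, 3))
reconstructed = markers + rng.normal(scale=1.0, size=markers.shape)
per_frame, overall = rms_tracking_error(reconstructed, markers)
print(f"overall RMS error: {overall:.2f} mm")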
The proposed 3D scanning and reconstruction system will provide unprecedented detail of human geometry and motion in realtime, allowing researchers to study the intricacies of human facial expressions or hand motion. In addition, with foreseeable technology advances in the next few years, our proposed system can be directly integrated into desktop monitors or laptop computers, with a huge potential impact on consumer-level applications. Entirely new forms of interaction will become possible, with applications in interactive computer games, realtime animation, and virtual reality environments such as computer-supported training and rehabilitation, or social networks.

The proposed project is a collaborative effort under the D-A-CH program and will bring together researchers from the University of Bielefeld, Germany, and the École Polytechnique Fédérale de Lausanne, Switzerland. We plan to build a team that consists, besides the two PIs, of two Ph.D. students and one postdoctoral scholar. Frequent mutual visits and a closely coordinated collaboration will ensure that the combined expertise of both groups is leveraged to its full potential for the successful completion of the project.
DFG Programme Research Grants
International Connection Switzerland
 
 
