
Spatial and Temporal Filtering of Depth Data for Telepresence

Subject Area: Image and Language Processing, Computer Graphics and Visualization, Human Computer Interaction, Ubiquitous and Wearable Computing
Funding: Funded from 2016 to 2020
Project Identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 327589909
 
Report Year: 2019

Summary of Project Results

In this project, our main objective was to filter depth camera data to improve the quality of 3D representations for telepresence systems. We first conducted tests to understand the nature of noise in depth images: we fitted mathematical functions and planes to the ground truth data and to the extracted noise data in order to characterize the difference between them under varying test conditions.

To remove this noise and smooth every depth pixel, we then developed a new real-time spatio-temporal filter that stabilizes distorted depth data simultaneously in the spatial and temporal domains. The filter combines a novel depth outlier detection method, motion estimation for depth cameras, and spatio-temporal depth filtering. After compiling the spatio-temporal neighborhood of every depth pixel in a frame, we apply a robust outlier detection and removal step based on Least Median of Squares linear regression; the method runs in real time, removing outliers and then filtering the spatio-temporal depth neighborhood. The motion estimation component uses a gradient-based estimation and correction procedure that removes the well-known ghosting artifacts caused by rapid motion in dynamic scenes.

We tested the method on depth data recorded in a multi-camera setup, where it effectively cleaned the depth images and thus produced a better 3D representation of the recorded scene. For evaluation, we compared the extracted noise data against the ground truth data with respect to viewing angle, distance from the camera, and lighting conditions.
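
To make the noise characterization step concrete, here is a minimal sketch (in Python with NumPy; all names are illustrative, not the project's actual code): fit a plane to depth samples of a flat target by least squares and treat the residuals as the extracted noise. Repeating this for different distances, viewing angles, and lighting conditions yields the kind of noise-versus-ground-truth comparison described above.

```python
import numpy as np

def plane_noise(points):
    """Fit a plane z = a*x + b*y + c to depth samples of a flat target by
    least squares; the residuals serve as the extracted noise signal."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    return coeffs, residuals

# Example: synthetic samples of a tilted plane with ~5 mm Gaussian noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(1000, 2))
z = 1.5 + 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0.0, 0.005, 1000)
coeffs, res = plane_noise(np.column_stack([xy, z]))
print(f"fitted plane {coeffs}, noise sigma ~ {res.std():.4f} m")
```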
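
The robust outlier step can be sketched as follows, assuming the depth values in a pixel's spatio-temporal neighborhood are modeled as a linear function of their spatial and temporal offsets. The report does not give implementation details, so the subset size, trial count, and threshold below are assumptions; the random-sampling scheme is the standard way to approximate Least Median of Squares regression.

```python
import numpy as np

def lmeds_outliers(X, z, n_trials=50, scale=2.5, seed=0):
    """Approximate Least-Median-of-Squares regression: fit z ~ [X, 1] @ w
    on random minimal subsets, keep the fit with the smallest median
    squared residual, and flag samples whose residual exceeds a robust
    threshold."""
    rng = np.random.default_rng(seed)
    A = np.column_stack([X, np.ones(len(X))])  # design matrix + intercept
    k = A.shape[1]                             # minimal subset size
    best_med, best_w = np.inf, None
    for _ in range(n_trials):
        idx = rng.choice(len(A), size=k, replace=False)
        w, *_ = np.linalg.lstsq(A[idx], z[idx], rcond=None)
        med = np.median((z - A @ w) ** 2)
        if med < best_med:
            best_med, best_w = med, w
    # Robust scale estimate from the best median (the factor 1.4826
    # assumes Gaussian inliers, as in Rousseeuw's LMedS).
    sigma = 1.4826 * np.sqrt(best_med)
    return np.abs(z - A @ best_w) > scale * sigma  # True = outlier

# Usage: X holds the (dx, dy, dt) offsets of a pixel's spatio-temporal
# neighbors and z their depth values; flagged samples are discarded
# before the smoothing average is taken.
```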
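
Finally, a simplified stand-in for the gradient-based motion handling: where the per-pixel temporal depth gradient is large (i.e., the surface moves quickly), the temporal window is collapsed so that stale samples are not blended into the result, which is what produces ghosting. The threshold and the hard cut-off are assumptions; the project's actual procedure estimates and corrects motion rather than merely suppressing temporal samples.

```python
import numpy as np

def temporal_weights(depth_stack, grad_thresh=0.02):
    """Down-weight temporal samples at pixels with large temporal depth
    gradients so fast-moving surfaces are not blended (ghosting).
    depth_stack: (T, H, W) array of consecutive depth frames in meters."""
    grad = np.abs(np.diff(depth_stack, axis=0))  # (T-1, H, W) gradients
    motion = grad.max(axis=0) > grad_thresh      # (H, W) moving-pixel mask
    T = depth_stack.shape[0]
    weights = np.ones((T, *depth_stack.shape[1:]))
    weights[:-1, motion] = 0.0  # at moving pixels, keep only the newest frame
    return weights

def filtered_depth(depth_stack, weights):
    """Weighted temporal average; spatial filtering would follow."""
    return (weights * depth_stack).sum(axis=0) / weights.sum(axis=0)
```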

