Project Details

Spatial and Temporal Filtering of Depth Data for Telepresence

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term from 2016 to 2020
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 327589909
 
Final Report Year 2019

Final Report Abstract

In this project, our main objective was to filter the data of depth cameras in order to improve the quality of 3D representations for telepresence systems. We first conducted tests to characterize the noise in depth images: we fitted mathematical functions and plane models both to ground-truth data and to the extracted noise data in order to quantify the differences between them under varying test conditions. To remove this noise and smooth every depth pixel, we then developed a new real-time spatio-temporal filter that stabilizes distorted depth data in the spatial and temporal domains simultaneously. The filter combines a novel depth outlier detection method, motion estimation for depth cameras, and spatio-temporal depth filtering. After compiling the spatio-temporal neighborhood of every depth pixel in a frame, we apply a robust outlier detection and removal step based on Least Median of Squares linear regression; this step cleans away outliers and filters the spatio-temporal depth neighborhood in real time. A gradient-based motion estimation and correction procedure removes the well-known ghosting artifacts caused by rapid motion inside dynamic scenes from the final results. We tested our method on depth data recorded in a multi-camera setup and showed that it efficiently cleans the depth images, resulting in a better 3D representation of the recorded scene. Finally, to evaluate the proposed method, we assessed the extracted noise data against the ground-truth data with respect to viewing angle, distance from the camera, and lighting conditions.
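The Least Median of Squares outlier-removal step can be illustrated with a small self-contained sketch. This is not the project's actual implementation; the function name, the exhaustive pairwise candidate search, and all thresholds are our assumptions. The idea: for one depth pixel's spatio-temporal neighborhood of samples, fit a line robustly and discard samples that lie far from the robust fit.

```python
from itertools import combinations

def lmeds_filter(samples, thresh_scale=2.5):
    """Robustly filter a spatio-temporal neighborhood of depth samples.

    samples: list of (t, z) pairs -- depth z observed at offset t.
    Fits z = a*t + b by Least Median of Squares: among the lines through
    every pair of samples, keep the one with the smallest median squared
    residual, then reject samples whose residual exceeds a robust
    threshold derived from that median.
    """
    n = len(samples)
    if n < 3:
        return list(samples)
    best_med, best_ab = float("inf"), None
    for (t1, z1), (t2, z2) in combinations(samples, 2):
        if t1 == t2:
            continue
        a = (z2 - z1) / (t2 - t1)   # candidate line through this pair
        b = z1 - a * t1
        res = sorted((z - (a * t + b)) ** 2 for t, z in samples)
        med = res[n // 2]           # median squared residual
        if med < best_med:
            best_med, best_ab = med, (a, b)
    if best_ab is None:
        return list(samples)
    a, b = best_ab
    # Robust scale estimate (Rousseeuw's LMedS sigma approximation).
    sigma = 1.4826 * (1 + 5.0 / (n - 2)) * best_med ** 0.5
    thresh = (thresh_scale * sigma) ** 2
    return [(t, z) for t, z in samples if (z - (a * t + b)) ** 2 <= thresh]
```

On a neighborhood whose samples are constant except for a single flying-pixel outlier, the robust fit ignores the outlier and only the consistent samples survive, whereas an ordinary least-squares fit would be pulled toward it.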
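The role of motion estimation in suppressing ghosting can be sketched with simple frame differencing. This is a hedged illustration only, not the gradient-based procedure of the project: the thresholds, function names, and blending scheme below are our assumptions. Pixels whose depth changes sharply between consecutive frames are flagged as moving and excluded from temporal averaging, so trailing "ghost" copies of moving objects do not accumulate in the filtered output.

```python
import numpy as np

def motion_mask(prev, curr, grad_thresh=30.0):
    """Flag depth pixels whose temporal gradient indicates motion.

    prev, curr: 2D float arrays of depth values (e.g., in mm) from
    consecutive frames. A large frame-to-frame depth change marks a
    pixel as moving.
    """
    dt = np.abs(curr - prev)      # temporal gradient per pixel
    return dt > grad_thresh       # True where motion is detected

def temporal_blend(prev, curr, moving, alpha=0.5):
    """Average static pixels over time; pass moving pixels through
    unfiltered so they do not leave ghosting trails."""
    blended = alpha * prev + (1.0 - alpha) * curr
    return np.where(moving, curr, blended)
```

A static pixel is smoothed toward its temporal average, while a pixel crossed by a moving object keeps its current, unaveraged depth.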

Publications

  • Robust Enhancement of Depth Images from Depth Sensors, Computers & Graphics, Volume 68, Pages 53-65, 2017
    Islam, ABM T.; Scheel, C.; Pajarola, R. & Staadt, O.
    (See online at https://doi.org/10.1016/j.cag.2017.08.003)
  • gSMOOTH - A Gradient-based Spatial and Temporal Method of Depth Image Enhancement, Computer Graphics International (CGI)’18, Bintan, Indonesia, Pages 175-184, 2018
    Islam, ABM T.; Luboschik, M.; Jirka, A. & Staadt, O.
    (See online at https://doi.org/10.1145/3208159.3208166)
  • Fusing Spatial and Temporal Components for Real-Time Depth Data Enhancement of Dynamic Scenes, Ph.D. Thesis, University of Rostock, Institute for Visual and Analytic Computing, 2019
    Islam, ABM T.
    (See online at https://doi.org/10.18453/rosdok_id00002542)
 
 
