Project Details

Efficient representation and generation of consistent 3D and 4D maps

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing; Geophysics
Term from 2011 to 2019
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 166047863
 
Final Report Year 2020

Final Report Abstract

The overall goal of the project was the research and development of efficient and robust algorithms for the management and processing of large point cloud data resulting from UAV-based capturing systems. According to the work plan, our research in the project concentrated on three main issues: first, the development of data structures for the management and compression of large 3D point cloud data; second, robust and efficient symmetry detection on rough scales; and third, the modeling of building surfaces.

In the first part we completed and published our work from the first phase of the project on the compression of static point clouds. The resulting compression scheme allows for instant, i.e. real-time, compression, transmission, and remote decompression of captured point cloud data, which is necessary in the UAV setting. Furthermore, a novel thread-safe GPU hash map data structure for volume data, e.g. signed distance fields, was developed that is robust under massively concurrent retrieval, insertion, and removal of entries at the thread level. This data structure allows for real-time remote visualization of the data and therefore enables the development of teleportation and telepresence applications. For this purpose we used a low-cost RGB-D acquisition setup, which is either mounted on a driving robot or carried by the user, and developed a telepresence system that was extensively evaluated. It is foreseen that in the next step this system will be extended to UAV scenarios.

The last effort in the context of data structures for point cloud data was the development of a novel, efficient approach for the hierarchical decomposition of irregularly sampled 3D geometry. The resulting hierarchy consists of several subsets of the original point cloud data at different sampling rates. As each individual point set possesses blue-noise properties, the sampling is very regular, making standard geometric operations such as normal and curvature estimation or edge detection much more stable and efficient than on the original, irregularly sampled point cloud.

In the second part we researched a novel, efficient method for reoccurrence detection and the estimation of the corresponding pose of template objects in the captured scene geometry. This is a vital ingredient in multiple problem statements. The main challenge of the template matching approach is to keep the search computationally feasible and fast while still being robust with respect to noise or other variations that might occur in the point cloud, such as partiality of objects due to scanning errors or occlusions. We developed a RANSAC-based approach building on a novel, voxel-based scoring method with an early-exit strategy as well as a novel sampling strategy for the generation of transformation hypotheses. This sampling strategy is built on the sampling of stable, salient points and exploits the locality of possible template occurrences in order to avoid generating unnecessary transformation hypotheses. The resulting RANSAC-based approach substantially improves runtime while at the same time yielding results whose correctness and completeness match or even exceed the previous state of the art.
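To make the scoring idea concrete, the following is a minimal C++ sketch of a voxel-based hypothesis score with an early-exit strategy, assuming the scene has been discretized into a sparse occupancy grid. All names (Vec3, Transform, VoxelGrid, scoreHypothesis) and the grid resolution are illustrative assumptions, not the project's actual implementation.

    #include <array>
    #include <cmath>
    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Rigid transformation hypothesis: rotation (row-major 3x3) plus translation.
    struct Transform {
        std::array<float, 9> R;
        Vec3 t;
        Vec3 apply(const Vec3& p) const {
            return { R[0]*p.x + R[1]*p.y + R[2]*p.z + t.x,
                     R[3]*p.x + R[4]*p.y + R[5]*p.z + t.y,
                     R[6]*p.x + R[7]*p.y + R[8]*p.z + t.z };
        }
    };

    // Sparse occupancy grid over the scene point cloud (assumed fixed voxel size).
    class VoxelGrid {
    public:
        VoxelGrid(const std::vector<Vec3>& scene, float voxelSize)
            : voxelSize_(voxelSize) {
            for (const Vec3& p : scene) occupied_.insert(key(p));
        }
        bool occupied(const Vec3& p) const { return occupied_.count(key(p)) > 0; }
    private:
        int64_t key(const Vec3& p) const {
            auto idx = [&](float v) { return static_cast<int64_t>(std::floor(v / voxelSize_)); };
            // Pack the three voxel indices into one 64-bit key (21 bits each).
            return (idx(p.x) & 0x1FFFFF) | ((idx(p.y) & 0x1FFFFF) << 21) | ((idx(p.z) & 0x1FFFFF) << 42);
        }
        float voxelSize_;
        std::unordered_set<int64_t> occupied_;
    };

    // Count how many transformed template points land in occupied scene voxels.
    // Abort as soon as the remaining points can no longer beat the best score so far.
    int scoreHypothesis(const std::vector<Vec3>& templatePts,
                        const Transform& T,
                        const VoxelGrid& grid,
                        int bestSoFar) {
        int hits = 0;
        const int n = static_cast<int>(templatePts.size());
        for (int i = 0; i < n; ++i) {
            if (hits + (n - i) <= bestSoFar) return hits;  // early exit
            if (grid.occupied(T.apply(templatePts[i]))) ++hits;
        }
        return hits;
    }

In such a scheme, hypotheses generated by the sampling strategy are ranked by this count, and the early exit keeps the cost per hypothesis low once a good candidate has been found.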
The structural analysis of both image and geometric data allows for a more generic as well as semantic analysis of the captured environment. In the third part of the project, pattern analysis and symmetry detection methods were investigated as a means to "understand" the acquired environment. First, an auxiliary method to compute a set of consistent normal orientations on acquired point clouds was developed and exploited in the context of our novel reconstruction method for buildings, which includes structural information about semantic entities such as walls, windows, and doors. Here, we developed two fully automatic methods for the analysis of 3D point clouds that allow for the reconstruction of a building information model of a complete building. While our first method was restricted to single-story buildings, our second method allows for the automatic reconstruction of buildings from point clouds consisting of an arbitrary number of stories, including nested ones. Image sensors for RGB and IR data, on the other hand, provide means to analyze the materials of surfaces and to perform relighting operations in the context of remote visualization, e.g. in a teleportation scenario. The derivation of these surface appearance properties, especially albedo and reflectance data, from point clouds was also a subject of our research.
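As an illustration of what consistent normal orientation involves, the following C++ sketch shows a classic greedy propagation over a precomputed neighbour graph, flipping a neighbour's normal whenever it disagrees with an already oriented one. This is a textbook baseline shown only for context, not the method developed in the project; the neighbour lists (e.g. k nearest neighbours) are assumed to be given.

    #include <queue>
    #include <vector>

    struct Normal { float x, y, z; };

    inline float dot(const Normal& a, const Normal& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Orients `normals` in place so that neighbouring normals point to the same
    // side. `neighbours[i]` lists the point indices adjacent to point i.
    void orientNormals(std::vector<Normal>& normals,
                       const std::vector<std::vector<int>>& neighbours) {
        const int n = static_cast<int>(normals.size());
        std::vector<bool> visited(n, false);
        for (int seed = 0; seed < n; ++seed) {          // handle disconnected components
            if (visited[seed]) continue;
            std::queue<int> frontier;
            frontier.push(seed);
            visited[seed] = true;
            while (!frontier.empty()) {
                int i = frontier.front();
                frontier.pop();
                for (int j : neighbours[i]) {
                    if (visited[j]) continue;
                    // Flip the neighbour if it points to the opposite side.
                    if (dot(normals[i], normals[j]) < 0.0f) {
                        normals[j] = { -normals[j].x, -normals[j].y, -normals[j].z };
                    }
                    visited[j] = true;
                    frontier.push(j);
                }
            }
        }
    }

A consistent orientation of this kind is what downstream steps such as surface reconstruction and the building-model extraction rely on.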
