Project Details

Laser-based Scene Interpretation in Dynamic Environments

Applicant Dr. Jens Behley
Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term from 2015 to 2020
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 289062648
 
Autonomous systems such as self-driving cars are expected to act self-sufficiently in dynamic environments containing many moving objects. An essential component of such systems is a reliable and efficient scene interpretation that can localize and identify, but also track, objects in the vicinity of the system, enabling the prediction of future states of the environment. Besides imaging sensors, three-dimensional laser range sensors are often used in outdoor environments; they produce precise point-wise range measurements in the form of a point cloud.

In this project, an approach for the dense interpretation of three-dimensional point clouds with temporally consistent segmentation and simultaneous classification of objects is to be developed. The approach should make it possible to localize individual objects in a sequence of point clouds from a moving sensor and to track these objects over multiple time steps. Concretely, three aims for the dense interpretation of a sequence of point clouds are to be realized:

1. determination of correspondences between consecutive point clouds,
2. estimation of the sensor motion, taking into account the static background and the trajectories of dynamic objects, and
3. semantic interpretation of the segment correspondences between consecutive laser scans using a classification approach.

Based on a hierarchical segmentation producing a coarse-to-fine segmentation of the point cloud, corresponding segments in consecutive laser point clouds are identified, and pose changes of the sensor are estimated from both the static parts and the moving objects. Segments in the hierarchy are classified by a segment-based classification that exploits the established segment correspondences, and these classifications are used to identify task-relevant objects and to complete the semantic segmentation of the scene.
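The first aim, finding segment correspondences between consecutive point clouds, can be illustrated with a minimal sketch. It assumes segments are already given as per-point label arrays and pairs them greedily by centroid distance; the function names, the greedy strategy, and the `max_dist` threshold are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def segment_centroids(points, labels):
    """Centroid of each segment, keyed by its segment label."""
    return {l: points[labels == l].mean(axis=0) for l in np.unique(labels)}

def match_segments(prev_cents, curr_cents, max_dist=2.0):
    """Greedily pair segments of consecutive scans by centroid distance.

    Returns a dict mapping previous-frame labels to current-frame labels;
    segments farther apart than max_dist stay unmatched, which allows for
    objects entering or leaving the sensor's field of view.
    """
    # All candidate pairs, cheapest (closest) first.
    pairs = sorted(
        (np.linalg.norm(cp - cc), lp, lc)
        for lp, cp in prev_cents.items()
        for lc, cc in curr_cents.items()
    )
    matches, used_prev, used_curr = {}, set(), set()
    for d, lp, lc in pairs:
        if d <= max_dist and lp not in used_prev and lc not in used_curr:
            matches[lp] = lc
            used_prev.add(lp)
            used_curr.add(lc)
    return matches
```

In a real system the matching would additionally use segment shape, size, and the segmentation hierarchy rather than centroids alone, but the one-to-one assignment structure stays the same.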
For each point in time, a segmentation of the point cloud into task-relevant objects will be achieved, together with a locally consistent estimate of the sensor's pose.
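The pose estimation from static scene parts (the second aim) can be sketched with the standard least-squares rigid alignment of matched points, solved via SVD (the Kabsch method). This is a generic building block, not the project's specific estimator; the matched static centroids are assumed to be given.

```python
import numpy as np

def estimate_motion(prev_pts, curr_pts):
    """Least-squares rigid transform (R, t) with curr ≈ R @ prev + t.

    prev_pts, curr_pts: (N, 3) arrays of matched points belonging to the
    static background (e.g. centroids of segments classified as static).
    Uses the SVD-based (Kabsch) closed-form solution.
    """
    mp, mc = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (prev_pts - mp).T @ (curr_pts - mc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the optimal orthogonal matrix.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (prev_pts.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mc - R @ mp
    return R, t
```

Applied per scan pair, the recovered (R, t) gives the locally consistent sensor pose change; trajectories of dynamic objects can then be estimated in the same way on their own segment correspondences.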
DFG Programme Research Grants
 
 
