Project Details
Sensor fusion for autonomous robotics in unstructured environments
Applicant
Professor Dr.-Ing. Dietrich Paulus
Subject Area
Automation, Mechatronics, Control Systems, Intelligent Technical Systems, Robotics
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
from 2011 to 2017
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 195328341
Terrain classification, navigation, and mapping for autonomous mobile systems will be realized for non-urban environments. The methods will operate on data from 3D laser scanners, position and odometry sensors, GPS, and color cameras. Through sensor fusion, additional information is integrated into the laser measurements and merged into a semantic 3D map. The approach will be able to cope with the many uncertainties of unstructured terrain without requiring map or route information. The terrain must therefore be analyzed and interpreted as precisely as possible in order to plan the best possible route. Detected obstacles are classified as static or dynamic objects, such as forest, road, or vehicle, and their semantics are entered into the 3D map. The autonomous vehicle can thus decide on and plan a path for safe navigation in unknown terrain. The approach differs from others in that a complete fusion and integration of the sensors for navigation and for building a semantic 3D map is implemented in real time. The first extension to the existing projects is the creation of a semantic Markov random field for terrain classification. This will allow detected objects in the robot's environment to be integrated into the classification process. It includes, for example, the knowledge that a vehicle should drive on roads or drivable surfaces whenever possible, which can be used to adapt the classification in areas where other dynamic obstacles are found. The second extension concerns the process developed to create maps from 3D laser scans. New methods are being developed that allow the robot to maintain an existing map over a longer period of time and to extend it, even after the robot has been powered down and restarted. In addition, a new idea will be developed and tested to improve the robustness of the mapping by maintaining several fundamentally different hypotheses about the course of the trajectory, rather than considering only the single most probable trajectory. This makes it possible to make better decisions about the data association.
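To illustrate the kind of terrain classification with a Markov random field mentioned above, the following is a minimal sketch, not the project's actual implementation: per-cell unary costs (standing in for fused laser/camera features) are smoothed with a Potts-style neighbourhood term and optimized by iterated conditional modes. All class names, array shapes, and parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative terrain classes (assumption): 0 = drivable, 1 = vegetation, 2 = obstacle.
NUM_CLASSES = 3

def icm_terrain_labels(unary_cost, smoothness=1.0, iterations=5):
    """Assign a terrain class to every cell of a 2D grid map.

    unary_cost: (H, W, NUM_CLASSES) array of per-cell costs, e.g. derived from
                fused laser/camera features (lower = more likely).
    smoothness: Potts penalty for neighbouring cells with different labels.
    Returns an (H, W) label image.
    """
    labels = np.argmin(unary_cost, axis=2)            # initial per-cell guess
    h, w = labels.shape
    for _ in range(iterations):                       # iterated conditional modes
        for y in range(h):
            for x in range(w):
                best_label, best_cost = labels[y, x], np.inf
                for c in range(NUM_CLASSES):
                    cost = unary_cost[y, x, c]
                    # Potts pairwise term over the 4-neighbourhood
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != c:
                            cost += smoothness
                    if cost < best_cost:
                        best_label, best_cost = c, cost
                labels[y, x] = best_label
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy_costs = rng.random((20, 20, NUM_CLASSES))   # stand-in for real sensor features
    print(icm_terrain_labels(noisy_costs))
```

In this sketch the smoothness term plays the role of the contextual knowledge described above: isolated cells that disagree with their surroundings are pulled toward the label of their neighbourhood, while strong per-cell evidence can still override it.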
DFG Programme
Research Grants