Project Details
VMAV - Cooperative micro aerial vehicles using onboard visual sensors
Applicant
Professor Dr. Richard Bamler, since 10/2014
Subject Area
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
from 2014 to 2019
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 245764381
The overall aim of the project is to advance the capabilities of visually controlled MAVs in the areas of flight behavior and autonomy, cooperative operation, and cognitive abilities, and in addition to decrease the size of such MAVs. Advances in these areas would enable new fields of application for MAVs and pave the way to further research topics in mobile robotics. The research proposal is structured into three work packages:
1. Visual-inertial MAV pose estimation and localization using multi-camera systems
2. Embedded vision algorithms for dynamic flight of small-scale MAVs
3. Methods for cooperative visual localization and semantic mapping
Work package 1 will investigate the suitability of multi-camera systems for 6DOF pose estimation and localization for MAVs performing dynamic maneuvers. This includes the development of visual-inertial pose estimation algorithms that exploit the advantages of multi-camera system geometries.
Work package 2 will investigate embedded computer vision algorithms to facilitate dynamic control and flight as well as further miniaturization of MAVs. For this, specific components of the visual control system will be moved to dedicated embedded processors to achieve the high frame rates necessary for dynamic flight.
Work package 3 will investigate the cooperative operation of MAVs, focusing on cooperative visual localization, mapping, and cognitive scene understanding and interpretation. In cooperative operation, MAVs should be able to share their individual knowledge of the environment and to incorporate the knowledge of others, thereby improving both environment mapping and self-localization. An important part of this work package is cognitive scene understanding: the MAVs should use object detection and classification methods to generate a semantic description of the environment, producing a semantically annotated 3D environment map, and should also use this meta-information to improve the mapping process (e.g. by adapting parameters based on the semantics) or the localization process. A rough illustrative sketch of the multi-camera setting in work package 1 is given after this description.
The project combines the competences of the three involved partners, ETHZ, TUM, and TUG. All three partners have long-standing experience with vision-controlled MAVs from various projects and have performed groundbreaking work in this area. The joint project will ensure that the combined expertise of the partners is utilized.
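As a rough illustration of the multi-camera setting addressed in work package 1 (not part of the proposal itself), the minimal sketch below expresses landmark observations from two rigidly mounted cameras in a common MAV body frame using known extrinsics, a basic building block of multi-camera pose estimation. All names, extrinsics, and landmark values are illustrative assumptions.

    # Minimal sketch (illustrative only): merging observations from several
    # rigidly mounted cameras into one MAV body frame via assumed extrinsics.
    import numpy as np

    def transform(T, points):
        """Apply a 4x4 rigid-body transform T to an (N, 3) array of points."""
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
        return (T @ homogeneous.T).T[:, :3]

    # Hypothetical extrinsics: pose of each camera expressed in the body frame.
    T_body_cam0 = np.eye(4)                        # forward-looking camera
    T_body_cam1 = np.eye(4)                        # side-looking camera
    T_body_cam1[:3, :3] = np.array([[0.0, -1.0, 0.0],   # 90 deg yaw rotation
                                    [1.0,  0.0, 0.0],
                                    [0.0,  0.0, 1.0]])
    T_body_cam1[:3, 3] = [0.0, 0.1, 0.0]           # 10 cm offset along body y

    # Landmarks triangulated in each camera's own frame (made-up values).
    landmarks_cam0 = np.array([[1.0, 0.0, 2.0]])
    landmarks_cam1 = np.array([[0.5, 0.2, 1.5]])

    # Merge all observations into a single body-frame point set, against which
    # one 6DOF body pose (e.g. fused with inertial data) can be estimated.
    landmarks_body = np.vstack([
        transform(T_body_cam0, landmarks_cam0),
        transform(T_body_cam1, landmarks_cam1),
    ])
    print(landmarks_body)

The point of the sketch is only that a well-chosen rig geometry lets all cameras contribute observations to a single body-frame pose estimate; the actual estimation algorithms are the subject of the work package.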
DFG Programme
Research Grants
International Connection
Austria, Switzerland
Partner Organisation
Fonds zur Förderung der wissenschaftlichen Forschung (FWF)
Participating Persons
Professor Dr. Horst Bischof; Professor Marc Pollefeys
Former Applicant
Professor Dr. Friedrich Fraundorfer, until 9/2014