
Visual navigation of mobile robots by means of adaptive methods for determining the optical flow

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Funding Funded from 2006 until 2012
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 17096584
 
Year of creation 2012

Summary of project results

In this project, we addressed local visual homing methods based on optical flow techniques as well as navigation strategies building on visual homing. Visual homing is the capability of a robot to return to a previously visited place under visual control. To this end, a view taken at the home position is stored (referred to as the snapshot). For homing, the currently perceived view is compared to the snapshot in order to derive a movement decision guiding the agent towards the goal. As views, we use panoramic images with a full 360° horizontal field of view, as obtained by a catadioptric camera setup. Among the existing homing methods, we mainly considered holistic methods and methods solving the correspondence problem by computing the optical flow. Furthermore, we concentrated on methods which can operate on arbitrarily aligned images, because most existing methods can only operate on images aligned w.r.t. a common reference direction and therefore require a (visual) compass.

Holistic homing methods use the entire image for computing the home direction. Warping methods, one type of holistic homing method, predict how the current view would change under certain movements of the robot and search for the movement parameters leading to the predicted image which is most similar to the snapshot (a simplified sketch of this search is given below). During the course of this project, we were able to extend the original warping method, which operates on one-dimensional images, to two-dimensional images, and to successively improve it considerably w.r.t. computational efficiency and homing accuracy. Our 2D-warping method is currently our best homing method and allows for robust navigation even in dynamically changing environments (“Changing environments”). It is therefore used as a building block for our vision-based navigation strategies for autonomous cleaning robots. For the second group of methods, our flow-line matching algorithm developed during the first project phase was extended to operate on arbitrarily aligned images by integrating a compass step into the homing method. However, it does not yet achieve the computational efficiency and homing accuracy of our 2D-warping method, although we revealed a close relationship between the two methods (“Arbitrarily aligned snapshots”).

The drawback of local visual homing is that homing is only possible in a limited area around the snapshot, referred to as the catchment area. For navigation beyond this region, several snapshots and their relations have to be stored in a topological map. In this project, we used such an approach for the navigation of an autonomous cleaning robot. The application requires the robot to completely and systematically cover its accessible workspace while keeping the portion of uncleaned areas and of repeated coverage as small as possible. As an extension of our triangulation method developed in the first project phase, we implemented a navigation strategy based on the extended Kalman filter to estimate the robot’s position and to cover rectangular segments by parallel lanes. The robot’s position is iteratively corrected by taking the bearing from the robot’s current position towards several snapshots taken along the previous lane (see the update sketch below). Thus, we do not use homing in its original sense as a means to approach a goal location, but in order to estimate angular relations between the current and former robot positions. This method can be used as a building block to cover more complex workspaces by combining several segments of parallel lanes (“Kalman-filter framework”).
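To make the warping idea described above more concrete, the following minimal Python sketch illustrates the classical warping scheme on one-dimensional panoramic intensity profiles under the equal-distance assumption. It is a deliberately naive brute-force illustration, not our optimized 2D-warping implementation; the search resolutions, the range of relative distances, and the sum-of-squared-differences measure are arbitrary choices made for this example.

```python
import numpy as np

def warp_distance(snapshot, current, beta, v, psi):
    """Dissimilarity between the snapshot and the view predicted from the
    current 1-D panoramic profile for a hypothesized movement: direction beta,
    relative distance v = d/R (equal-distance assumption), rotation psi."""
    n = current.size
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Bearing under which each (assumed equally distant) landmark would be
    # seen after the hypothesized movement.
    theta_pred = np.arctan2(np.sin(theta) - v * np.sin(beta),
                            np.cos(theta) - v * np.cos(beta)) - psi
    idx = np.round((theta_pred % (2.0 * np.pi)) / (2.0 * np.pi) * n).astype(int) % n
    return float(np.sum((current - snapshot[idx]) ** 2))

def warping_home_direction(snapshot, current, n_beta=36, n_v=10, n_psi=36):
    """Brute-force search over the movement parameters; the direction of the
    best-matching hypothesized movement is taken as the home direction in the
    current robot frame."""
    best_d, best_beta = np.inf, 0.0
    for beta in np.linspace(0.0, 2.0 * np.pi, n_beta, endpoint=False):
        for v in np.linspace(0.05, 0.95, n_v):
            for psi in np.linspace(0.0, 2.0 * np.pi, n_psi, endpoint=False):
                d = warp_distance(snapshot, current, beta, v, psi)
                if d < best_d:
                    best_d, best_beta = d, beta
    return best_beta
```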
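The position correction of the “Kalman-filter framework” can be pictured as a bearing-only correction step of an extended Kalman filter. The sketch below shows such a step for a planar pose state (x, y, heading) and a single bearing measurement towards a snapshot position estimated on the previous lane; the state layout, the noise values, and the angle conventions are illustrative assumptions and do not reproduce our exact formulation.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def ekf_bearing_update(x, P, snapshot_pos, z, R=np.deg2rad(3.0) ** 2):
    """One EKF correction step.
    x: state [px, py, heading], P: 3x3 covariance,
    snapshot_pos: (lx, ly) estimated position of a snapshot on the previous lane,
    z: bearing towards that snapshot obtained visually (robot frame),
    R: variance of the bearing measurement."""
    px, py, phi = x
    dx, dy = snapshot_pos[0] - px, snapshot_pos[1] - py
    q = dx * dx + dy * dy
    z_pred = wrap_angle(np.arctan2(dy, dx) - phi)   # expected bearing
    H = np.array([[dy / q, -dx / q, -1.0]])         # Jacobian of the bearing model
    y = wrap_angle(z - z_pred)                      # innovation
    S = H @ P @ H.T + R                             # innovation covariance (1x1)
    K = P @ H.T / S                                 # Kalman gain (3x1)
    x_new = x + (K * y).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# Illustrative usage with made-up numbers:
x = np.array([2.0, 0.5, np.deg2rad(90.0)])
P = np.diag([0.04, 0.04, np.deg2rad(5.0) ** 2])
x, P = ekf_bearing_update(x, P, snapshot_pos=(0.0, 0.0), z=np.deg2rad(110.0))
```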
For cleaning strategies covering complex workspaces by combining several cleaning segments, it is essential to reliably detect loop closures, i.e. places which have already been visited. This detection has to be done based on visual information only, because the robot’s position estimate can drift over time or because purely topological maps without estimates of the robot’s position are used. We investigated two approaches to visual loop-closure detection. The first relies on pixel-by-pixel comparisons of entire images. In order to reduce the influence of illumination changes on the compared images, the images are preprocessed prior to comparison. The results obtained for this approach show that the methods can achieve very accurate loop-closure detection even under strong changes of the illumination. However, the methods are not applicable on a real robot due to their high computational complexity: pixel-by-pixel comparisons require the images to be aligned w.r.t. a common reference direction and hence have to incorporate a compass method. The second approach is to transform the images into a lower-dimensional representation referred to as an image signature and to compare signatures instead of entire images (a minimal example is sketched below). The advantages of signature-based methods are (i) that signatures are rotationally invariant and therefore images do not have to be aligned w.r.t. a common reference direction, and (ii) that comparing signatures is much more efficient. Reliable loop-closure detection under small or moderate changes of the illumination is possible for signatures with a dimensionality of only 5% of the original image size. For stronger changes of the illumination, our signature-based approach does not yet reach the performance of methods relying on pixel-by-pixel comparisons (“Loop-closure detection”).

As changes of the illumination conditions can considerably alter the appearance of the images used for navigation, robustness against such changes is an essential aspect of every appearance-based navigation method. Our navigation strategies typically consist of three processing stages: image acquisition, image preprocessing, and the actual navigation method. We therefore addressed the problem of achieving illumination invariance by reducing the influence of illumination changes on each of the three processing stages. For the first stage, we implemented a camera controller adjusting the camera parameters in order to keep the average image brightness constant (see the controller sketch below). For the second and third stages, we tested different image preprocessing methods and image comparison functions in the context of homing and loop-closure detection. By this means, we could identify robust combinations of preprocessing and dissimilarity functions and could considerably improve the robustness of our navigation strategies against changes of the illumination (sub-project “Adaptive methods for illumination invariance”).

For the evaluation of real-robot experiments (both for homing methods and for cleaning strategies), we developed two robot tracking systems. One system is a multi-camera system mounted statically in our lab, whereas the second is an active single-camera system designed to be portable for out-of-lab experiments. Both systems are valuable tools for analyzing the developed methods (sub-project “Robot tracking systems”).
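As a concrete illustration of the signature idea, the sketch below computes one possible rotationally invariant signature of an unwrapped panoramic image: the amplitudes of the lowest Fourier coefficients of each image row along the azimuth. Since a rotation of the robot only shifts the rows cyclically, the amplitude spectrum is unaffected. The particular descriptor, its dimensionality, and the matching threshold are assumptions made for this example and are not necessarily the signatures evaluated in the project.

```python
import numpy as np

def panorama_signature(image, n_coeff=8):
    """Rotation-invariant signature of an unwrapped panoramic image
    (rows = elevations, columns = azimuth over 360 degrees): per-row
    amplitudes of the lowest n_coeff Fourier coefficients."""
    spectrum = np.fft.rfft(image.astype(float), axis=1)
    amplitudes = np.abs(spectrum[:, :n_coeff])
    sig = amplitudes.ravel()
    # Normalization reduces the effect of global brightness scaling.
    return sig / (np.linalg.norm(sig) + 1e-12)

def is_loop_closure(sig_a, sig_b, threshold=0.1):
    """Declare a loop closure if the signature distance falls below a threshold."""
    return np.linalg.norm(sig_a - sig_b) < threshold
```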
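The camera controller mentioned for the first processing stage can be as simple as a feedback loop on the exposure time. The sketch below assumes a hypothetical camera interface offering grab_frame(), get_exposure_ms(), and set_exposure_ms(); the gain, setpoint, and exposure limits are likewise illustrative.

```python
def exposure_controller_step(camera, target_brightness=128.0, gain=0.005,
                             min_exposure=0.1, max_exposure=50.0):
    """One step of a simple feedback controller keeping the average image
    brightness near a setpoint by adjusting the exposure time.
    `camera` is assumed to provide grab_frame(), get_exposure_ms() and
    set_exposure_ms() -- hypothetical methods used for illustration."""
    frame = camera.grab_frame()                      # grayscale image as a numpy array
    error = target_brightness - float(frame.mean())  # positive if the image is too dark
    exposure = camera.get_exposure_ms() * (1.0 + gain * error)
    exposure = max(min_exposure, min(max_exposure, exposure))
    camera.set_exposure_ms(exposure)
    return exposure
```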
For future work, we will continue our research on the vision-based navigation of autonomous cleaning robots. Based on the results of this project, an industrial cooperation project was started to pursue this direction further. Several results obtained within this project can serve as a starting point for future work in the field of simultaneous localization and mapping (SLAM) using omnidirectional images as the main sensory information. In contrast to the currently used topological mapping algorithms, SLAM methods use spatio-temporal sensor-data fusion to improve all position estimates even after the corresponding place nodes have been added to the map. These working directions include hierarchical trajectory-based SLAM, parsimonious trajectory-based SLAM, and feature-based bearing-only SLAM.

Hierarchical SLAM methods divide the robot’s workspace into several sub-regions and locally correct the positions within these sub-regions before globally correcting the spatial relations between the sub-regions. Because they subdivide the entire workspace into smaller segments, these methods are especially interesting for our cleaning strategies. For position correction within a single cleaning segment, extensions of our existing work on the “Kalman-filter framework” can be applied. These methods are closely related to trajectory-based SLAM algorithms because they also use former robot positions characterized by snapshots as landmarks, instead of using external features as landmarks and estimating their positions in the world.

Our signature-based loop-closure detection methods can easily be applied to topological navigation and to parsimonious trajectory-based SLAM methods. We consider these methods especially suitable for robots with limited computational power and restricted storage capacity. By applying multi-dimensional scaling (MDS) techniques or modern variants of MDS to pairwise signature dissimilarities, the spatial arrangement of former robot positions can be optimized depending on the distances in signature space (see the sketch below). The third approach starts from our landmark-tracking variant of flow-line matching (“Landmark tracking and segmentation”). In conjunction with odometry estimates of the traveled distance, this method can be extended to a bearing-only visual SLAM method estimating a feature’s position in world coordinates by triangulation. As a further application of our signature-based methods, we will work towards inferring spatial information such as doors, passages, or different regions of the robot’s workspace by clustering image signatures.

Since we have already started an industrial cooperation in the field of indoor cleaning robots, this will be the primary area of application for the methods developed in this project. Beyond that, the results can be applied to other indoor and outdoor tasks involving complete-coverage navigation, including lawn mowing for domestic robots as well as seeding, fertilizing, or harvesting for agricultural robots. For the latter applications, position estimates obtained by GPS could be fused with estimates obtained by vision-based methods in order to avoid the additional cost of differential GPS.
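To illustrate the MDS-based arrangement of former robot positions mentioned above, the following sketch applies classical (Torgerson) multi-dimensional scaling to a matrix of pairwise signature dissimilarities. Using plain classical MDS rather than a modern variant, and treating the dissimilarities as approximate Euclidean distances, are simplifying assumptions made for this example.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical multi-dimensional scaling.
    D: (n, n) matrix of pairwise dissimilarities between image signatures,
       interpreted as approximate distances between the robot positions.
    Returns an (n, dim) array of coordinates (up to rotation/reflection)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:dim]  # keep the largest components
    L = np.sqrt(np.clip(eigvals[order], 0.0, None))
    return eigvecs[:, order] * L             # coordinates of the former robot positions

# Illustrative usage: D[i, j] = dissimilarity between the signatures of places i and j.
# positions = classical_mds(D, dim=2)
```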

Project-related publications (selection)

  • R. Möller et al.: "Neuroethological concepts at work: Insect-inspired methods for visual robot navigation". In: Biological Approaches for Engineering, 2008, pp. 91–94.
  • L. Gerstmayr et al.: "A vision-based trajectory controller for autonomous cleaning robots". In: Autonome Mobile Systeme 2009. Springer, 2009, pp. 65–72.
  • R. Möller: "Local visual homing by warping of two-dimensional images". In: Robotics and Autonomous Systems 57.1 (2009), pp. 87–101.
  • R. Möller: A comparison of four distance measures for 2D warping methods. Technical report, AG Technische Informatik, Technische Fakultät, Universität Bielefeld, 2010.
  • R. Möller: An alternative distance measure for 2D warping methods. Technical report, AG Technische Informatik, Technische Fakultät, Universität Bielefeld, 2010.
  • R. Möller, M. Krzykawski, and L. Gerstmayr: "Three 2D-warping schemes for visual robot navigation". In: Autonomous Robots 29.3 (2010), pp. 253–291.
  • M. Krzykawski: Applying Min-Warping to View Reconstruction. Technical report, AG Technische Informatik, Technische Fakultät, Universität Bielefeld, 2011.
  • L. Gerstmayr-Hillen et al.: "Parsimonious Loop-Closure Detection based on Global Image-Descriptors of Panoramic Images". In: Proceedings of the 15th International Conference on Advanced Robotics (ICAR 2011), 2011, pp. 576–581.
  • L. Gerstmayr-Hillen: Loop-Closure Detection and Visual Compass Based on Global Image Comparisons. Technical report, AG Technische Informatik, Technische Fakultät, Universität Bielefeld, 2012.
  • L. Gerstmayr-Hillen: Visual Loop-Closure Detection Based on Global Image Signatures. Technical report, AG Technische Informatik, Technische Fakultät, Universität Bielefeld, 2012.
 
 
