
Interpretation of Environments through Incremental Learning

Subject Area Geodesy, Photogrammetry, Remote Sensing, Geoinformatics, Cartography
Funding Funded from 2011 to 2020
Project Identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 166047863

Year of Creation 2020

Summary of Project Results

The goal of the project was the semantic interpretation of 3D maps and of sensor data, such as RGB images, captured by a UAV. One important aspect was the incremental learning of the classifiers, including the possibility of adding new categories, since it cannot be assumed that all categories are known a priori in the context of mapping on demand (MOD). A second important aspect was the ability to localize semantic classes in regions that have not been observed yet. In the context of 3D reconstruction, this is also known as semantic scene completion, which combines 3D reconstruction and semantic interpretation.

As part of the project, we developed an approach for semantic segmentation that runs in real time on the UAV. Compared to a standard random forest, we reduced the runtime by a factor of 192 while increasing the global pixel accuracy by 4 percentage points. We furthermore developed approaches for random forests that allow new categories to be learned incrementally. The approach achieves 96% of the performance of a classifier trained with all categories together. We also introduced the concept of new, unknown categories for domain adaptation, which is the task of compensating for differences between the training data (source domain) and the data that needs to be interpreted (target domain). While previous work on domain adaptation assumed that both domains contain the same categories, we proposed the first approach that can deal with new, unknown categories in the target domain that were not part of the source domain. The corresponding work on open set domain adaptation received an Honorable Mention for the Marr Prize at ICCV 2017.

If the data is transferred off the UAV, higher segmentation accuracy can be achieved due to better computational resources, including GPUs, that are not available on the UAV. For this part, we changed the methodology to convolutional neural networks, since they achieve a higher accuracy when GPUs are available.
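The idea of incrementally learning new categories can be illustrated with a simple nearest-class-mean classifier, which keeps a running mean per category and can therefore absorb previously unseen categories without retraining from scratch. The following is a minimal sketch in Python; the class name `IncrementalNCM` is hypothetical, and this is an illustration of the underlying principle, not the project's actual random-forest implementation.

```python
import numpy as np

class IncrementalNCM:
    """Nearest-class-mean classifier that can add categories incrementally.

    Hypothetical sketch: each class is represented by a running mean of its
    feature vectors, so new classes can be added on the fly.
    """

    def __init__(self):
        self.means = {}    # label -> running mean vector
        self.counts = {}   # label -> number of samples seen so far

    def partial_fit(self, X, y):
        # Update the running mean of each class; labels not seen before
        # are added on the fly, so no retraining from scratch is needed.
        for x, label in zip(X, y):
            if label not in self.means:
                self.means[label] = np.zeros_like(x, dtype=float)
                self.counts[label] = 0
            self.counts[label] += 1
            self.means[label] += (x - self.means[label]) / self.counts[label]

    def predict(self, X):
        labels = list(self.means)
        centers = np.stack([self.means[label] for label in labels])
        # Assign each sample to the class with the nearest mean.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        return [labels[i] for i in d.argmin(axis=1)]
```

Because only per-class statistics are stored, adding a category costs a single `partial_fit` call with the new samples, while all previously learned categories are kept unchanged.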
Since no large-scale dataset was publicly available, we created a new large-scale benchmark for semantic point cloud segmentation and semantic scene completion. The proposed extensions of previous work on semantic segmentation and semantic scene completion outperform the state of the art.
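The open set setting mentioned above can be sketched in a few lines: target-domain samples are assigned to the nearest known source class, or rejected as "unknown" if they are too far from every class mean. The function name and the fixed distance threshold are hypothetical simplifications; the published open set domain adaptation approach is more involved.

```python
import numpy as np

def assign_open_set(target_feats, source_means, threshold):
    """Assign each target sample to the nearest known source class,
    or to 'unknown' if it is too far from every class mean.

    Hypothetical sketch of the open set idea with a fixed threshold.
    """
    labels = list(source_means)
    centers = np.stack([source_means[label] for label in labels])
    assignments = []
    for x in target_feats:
        d = np.linalg.norm(centers - x, axis=1)  # distance to each class mean
        i = d.argmin()
        # Reject samples that are far from all known classes as 'unknown'.
        assignments.append(labels[i] if d[i] <= threshold else "unknown")
    return assignments
```

The rejection step is what distinguishes the open set from the closed set setting: target samples from categories absent in the source domain are not forced into a known class.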

Project-related Publications (Selection)

  • Ristin, M., Gall, J., Guillaumin, M., and Van Gool, L. (2015). From categories to subcategories: Large-scale image classification with partial class label refinement. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
    (See online at https://doi.org/10.1109/CVPR.2015.7298619)
  • Ristin, M., Guillaumin, M., Gall, J., and Van Gool, L. (2016). Incremental learning of random forests for large-scale image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(3):490–503
    (See online at https://doi.org/10.1109/TPAMI.2015.2459678)
  • Panareda Busto, P. and Gall, J. (2017). Open set domain adaptation. In International Conference on Computer Vision (ICCV), pages 754–763
    (See online at https://doi.org/10.1109/ICCV.2017.88)
  • Garbade, M. and Gall, J. (2017). Thinking outside the box: Spatial anticipation of semantic categories. In British Machine Vision Conference (BMVC)
    (See online at https://doi.org/10.5244/C.31.90)
  • Chen, Y.-T., Garbade, M., and Gall, J. (2019). 3D semantic scene completion from a single depth image using adversarial training. In IEEE International Conference on Image Processing (ICIP)
    (See online at https://doi.org/10.1109/ICIP.2019.8803174)
  • Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., and Gall, J. (2019). SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In International Conference on Computer Vision (ICCV)
    (See online at https://doi.org/10.1109/ICCV.2019.00939)