Project Details

IP1: Incorporating Short-Term Spatial-Temporal Information for Robotic Sensing

Subject Area Plant Cultivation, Plant Nutrition, Agricultural Technology
Automation, Mechatronics, Control Systems, Intelligent Technical Systems, Robotics
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term since 2022
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 459376902
 
Robotic platforms are designed to traverse an environment continuously in order to automate tasks. Key to achieving these tasks are the robotic vision algorithms that interpret the scene. Despite the impressive performance achieved by vision-based segmentation systems, they generally ignore that the robot is moving through a structured environment and can capture temporal (video) data, both of which are rich sources of information.

This becomes a rich source of prior information for robotic vision in a horticultural environment, particularly for the task of semantic segmentation. The scenes can be assumed to be relatively static from time (t-1) to t, so there has been limited change in the scene. We should therefore be able to more easily estimate the current viewpoint at time t by exploiting the results obtained from the previous viewpoint at (t-1). Yet, to date, research has exploited either spatial information or temporal information alone, but not both jointly.

In this project we will research how to combine how far the robot has moved (spatial information) with video data (temporal information) to improve the performance (accuracy and/or robustness) of robotic vision systems. This information will be embedded within deep learning systems to greatly improve the state of semantic segmentation in horticultural environments.
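The idea of exploiting the previous viewpoint can be sketched as a simple probability fusion: warp the previous frame's per-pixel class probabilities by the robot's estimated motion, then average them with the current prediction. The sketch below is purely illustrative and is not the project's method; the function name, the pure-translation motion model, and the equal-weight fusion rule are all assumptions made for the example.

```python
import numpy as np

def fuse_with_prior(curr_probs, prev_probs, shift):
    """Fuse current per-pixel class probabilities with a
    motion-compensated prior from the previous frame.

    curr_probs, prev_probs: (H, W, C) arrays of class probabilities.
    shift: (dy, dx) pixel displacement of the scene between frames,
           assumed here to come from robot odometry (spatial information).
    """
    dy, dx = shift
    num_classes = prev_probs.shape[-1]
    # Warp the previous frame's probabilities into the current view.
    warped = np.roll(prev_probs, shift=(dy, dx), axis=(0, 1))
    # Pixels that rolled in from outside the frame carry no valid prior;
    # fall back to a uniform prior there.
    if dy > 0:
        warped[:dy, :, :] = 1.0 / num_classes
    elif dy < 0:
        warped[dy:, :, :] = 1.0 / num_classes
    if dx > 0:
        warped[:, :dx, :] = 1.0 / num_classes
    elif dx < 0:
        warped[:, dx:, :] = 1.0 / num_classes
    # Equal-weight convex fusion of the current prediction and the
    # temporal prior, then per-pixel argmax to get the segmentation.
    fused = 0.5 * curr_probs + 0.5 * warped
    return fused.argmax(axis=-1)
```

In a learned system this hand-crafted fusion would instead be embedded within the network, but the sketch shows why a near-static scene from (t-1) to t makes the previous result a useful prior.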
DFG Programme Research Units
 
 
