Project Details

Anticipative Human-Robot Collaboration (P8)

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Automation, Mechatronics, Control Systems, Intelligent Technical Systems, Robotics
Term from 2017 to 2021
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 313421352
 
Effective human-robot collaboration requires the robot system to observe human actions and to predict future states of the collaborative workspace in order to generate anticipatory robot behavior. The objective of this project is to model a shared human-robot workspace, to predict semantically meaningful future states on multiple spatio-temporal scales, and to plan anticipatory robot actions that realize human-robot collaboration. We will extend the small workspace for collaborative manipulation developed in the first-phase project "Learning Hierarchical Representations for Anticipative Human-Robot Collaboration" to a larger collaborative mobile manipulation scenario.

Since the workspace will be observed from multiple viewpoints, the semantic perception and prediction of the individual views must be merged into a 3D allocentric semantic model of the collaborative scene and its changes. We will explicitly model the intrinsic sensor calibrations and the 3D transformations between the sensor coordinate frames and the allocentric scene frame. The scene will be modeled by simple localized 3D representations, such as occupancy grids, surfels, or distance fields, together with their semantic attributes (e.g., class probabilities) and their changes. The entire model will be a differentiable function graph. The parameters of these transformations and representations will be learned by instantaneously predicting individual views from the other views aggregated in the 3D representation; this will yield, for example, camera pose estimates. The predictive aspects of the representations will be learned by predicting future views from aggregated past views. Because the intentions and actions of the agents in the scene, i.e. humans and robots, carry important predictive information, predictions of future scene states will be conditioned not only on the current scene state but also on human or robot intents and actions.
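The multi-view fusion step described above can be illustrated with a minimal sketch: points observed in a sensor's coordinate frame are transformed into the allocentric scene frame via the sensor's extrinsic calibration, and their per-point semantic class logits are accumulated in a sparse voxel grid. This is an illustrative assumption of one possible realization, not the project's actual implementation; the function names, the additive log-odds fusion rule, and the voxel size are hypothetical.

```python
import numpy as np

def camera_to_allocentric(points_cam, R, t):
    """Transform 3D points from a camera frame into the allocentric
    scene frame, given extrinsic rotation R (3x3) and translation t (3,)."""
    return points_cam @ R.T + t

def fuse_view(grid_logits, points_cam, class_logits, R, t, voxel_size=0.1):
    """Accumulate per-point class logits from one view into a sparse
    voxel grid (a dict keyed by integer voxel coordinates).
    Additive fusion of logits corresponds to a simple log-odds update."""
    pts_allo = camera_to_allocentric(points_cam, R, t)
    voxels = np.floor(pts_allo / voxel_size).astype(int)
    for v, logit in zip(map(tuple, voxels), class_logits):
        grid_logits[v] = grid_logits.get(v, 0.0) + logit
    return grid_logits

# Two cameras observe the same world point from different poses;
# after transformation both observations land in the same voxel.
grid = {}
fuse_view(grid, np.array([[1.0, 0.0, 2.0]]), np.array([2.0]),
          np.eye(3), np.zeros(3))
fuse_view(grid, np.array([[0.0, 0.0, 2.0]]), np.array([1.0]),
          np.eye(3), np.array([1.0, 0.0, 0.0]))
```

In the full model, the extrinsics R and t would themselves be parameters of the differentiable function graph, so that view-prediction losses can refine them, yielding camera pose estimates as a by-product.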
This conditioning yields a predicted semantic state of the joint human-robot workspace on multiple levels of spatio-temporal abstraction, which will be used to plan anticipative robot behavior in a coarse-to-fine, abstract-to-concrete manner. We will demonstrate the utility of our approach in collaborative mobile manipulation tasks in which the robot supports the human by providing the needed objects in the right order at the right moment.
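The coarse-to-fine planning idea can be sketched as a two-level scheme: a coarse level orders deliveries by the predicted time at which the human will need each object, and a fine level expands each delivery into concrete robot actions. This is a hypothetical toy illustration under stated assumptions; the object names, the action vocabulary, and the two-level decomposition are assumptions for the example, not the project's planner.

```python
def coarse_plan(predicted_needs):
    """Coarse level: order objects by the predicted time (in seconds)
    at which the human will need them."""
    return [obj for obj, _ in sorted(predicted_needs.items(),
                                     key=lambda kv: kv[1])]

def refine(obj):
    """Fine level: expand one delivery into a concrete action sequence
    (hypothetical action names for illustration)."""
    return [("navigate_to", obj), ("grasp", obj), ("handover", obj)]

def anticipative_plan(predicted_needs):
    """Coarse-to-fine: sequence the deliveries, then refine each one."""
    return [step for obj in coarse_plan(predicted_needs)
            for step in refine(obj)]

plan = anticipative_plan({"screwdriver": 5.0, "bolt": 2.0})
```

Because the coarse plan is derived from predicted future needs rather than observed requests, the robot can start navigating toward the next object before the human asks for it.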
DFG Programme Research Units
 
 
