Project Details

Implicit mobile human-robot communication for spatial action coordination with context-specific semantic environment modeling

Subject Area: Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing; Human Factors, Ergonomics, Human-Machine Systems
Term: since 2022
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 502483052
 
The use of robots in industry as well as in working and everyday life is becoming more and more flexible. Current methods for machine learning and adaptive motion planning lead to more robust behavior and higher autonomy of robots. Nevertheless, collaborative human-robot interactions still suffer from interruptions and breakdowns when the human cannot comprehend the robot's movement behavior. A common cause is that the human has an incorrect or limited picture of what the robot currently perceives and what its internal state is. This could be avoided if the robot could understand and incorporate the mental states and perspective of its interaction partner into its own action generation, in order to actively establish a common understanding of the interaction.

A key competence for such collaboration between humans and robots is the ability to communicate and mutually coordinate via implicit signals of body language and movement. The project investigates implicit human-robot communication in collaborative actions, using the example of jointly assembling a shelf. In experimental studies, situations will be created and recorded in which the interaction and mutual perception between human and robot are disturbed.

On the one hand, new perception methods are explored that robustly detect interaction-relevant features from head and body poses and facial expressions, even under occlusion. These features are interpreted in the context of the action and the environment, so that implicit communication signals (e.g., turning toward, turning away, compliance, hinting) and internal states (e.g., approval, disapproval, willingness to interact) can be inferred. On the other hand, new methods are explored that enable the robot to incorporate the perspective and state of the human interlocutor into its own action planning and to actively elicit user reactions.

This leads to a spatial coordination of the partners during the assembly of the shelf that takes the mutual perception and the goal of the action into account. Through active use of body pose, relative orientation, and movement, the robot can resolve conflict situations in advance, without the need for explicit instructions.
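To make the idea of inferring implicit signals from pose features concrete, the following is a minimal Python sketch of a "turning toward" detector based on head pose. It is an illustrative assumption, not the project's actual method: the HumanObservation structure, the is_turned_toward heuristic, and the field-of-view threshold are all hypothetical.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class HumanObservation:
        head_position: np.ndarray  # (x, y) in a shared workspace frame
        head_yaw: float            # radians; 0 = facing along +x

    def is_turned_toward(obs: HumanObservation,
                         robot_position: np.ndarray,
                         fov_deg: float = 60.0) -> bool:
        """Heuristic: the human counts as 'turned toward' the robot if the
        bearing from the head to the robot lies within a cone around the
        head's facing direction."""
        to_robot = robot_position - obs.head_position
        bearing = np.arctan2(to_robot[1], to_robot[0])
        # Wrap the angular difference to [-pi, pi] before thresholding.
        diff = np.arctan2(np.sin(bearing - obs.head_yaw),
                          np.cos(bearing - obs.head_yaw))
        return abs(np.degrees(diff)) <= fov_deg / 2.0

    # Example: human at the origin facing +x, robot slightly off-axis.
    obs = HumanObservation(head_position=np.array([0.0, 0.0]), head_yaw=0.0)
    print(is_turned_toward(obs, robot_position=np.array([1.0, 0.2])))  # True

In practice, such a geometric cue would only be one feature among many (body pose, facial expression, action context) feeding the inference of internal states described above.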
DFG Programme: Research Grants
 
 
