Project Details

Learning the Context in Programming by Demonstration of Manipulation Tasks

Subject Area Automation, Mechatronics, Control Systems, Intelligent Technical Systems, Robotics
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term from 2014 to 2018
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 255319423
 
State-of-the-art service robots can learn new manipulation tasks, consisting of multiple defined actions, from observing a human teacher; this approach is known as Programming by Demonstration (PbD). The logical structure of a task, characterized by branches or alternative actions, can be learned with symbolic PbD approaches. Subsymbolic PbD approaches allow the robot to learn atomic actions that generate executable robot motions. However, they produce no information about the context in which an atomic action can be executed. To execute a manipulation task autonomously, an operational description of this context is necessary. The problem to solve is which object properties are relevant and measurable, and how the robot can observe them autonomously with high reliability. Visual perception alone is insufficient, since the variety of objects in human environments leads to a large number of ambiguities and false positives. Resolving these issues requires selecting suitable object classifiers, reducing the number of objects to classify, and defining search regions. Additionally, sensor actions have to be defined to determine object properties that cannot be measured visually. In most state-of-the-art systems, this knowledge is specified manually.

The goal of this project is therefore to extend the PbD approach to learn the context of a manipulation task from human observation. To achieve this, we will integrate and extend methods from scene analysis into the PbD approach. The second goal is to resolve ambiguities and to detect false positives of visual perception algorithms using the learned contexts. We will integrate and extend methods from interactive object detection into the PbD approach in order to efficiently learn sensor actions that measure non-visual object properties, e.g. weight, and thereby resolve ambiguities and false detections.

Based on the learned context, the robot can infer the role of unknown objects in the environment, e.g. from their spatial relations to known objects. Without generalization, however, a learned manipulation task cannot be executed with a previously unknown object. We therefore increase the robot's generalization capabilities by interactively adapting the learned constraints and goals of a manipulation task to a novel object. We plan to apply morphing methods to transform constraints and goals on the basis of 3D object models; the transformation is verified and adjusted interactively with the help of simulation techniques. The developed algorithms will be implemented on real anthropomorphic robot systems and evaluated on real-world examples.
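To make the disambiguation mechanism concrete, the following is a minimal Python sketch of how a learned context could prune visual hypotheses and trigger a sensor action to measure a non-visual property such as weight. Every name and value here (LearnedContext, Detection, measure_weight, the weight ranges) is a hypothetical illustration of the idea, not the project's actual data structures or code.

    from dataclasses import dataclass

    # Hypothetical model: a learned context restricts which object classes
    # are plausible in a search region and records per-class weight ranges
    # learned from demonstrations (all values illustrative).
    @dataclass
    class Detection:
        label: str     # class hypothesis from a visual classifier
        score: float   # classifier confidence in [0, 1]

    @dataclass
    class LearnedContext:
        plausible_labels: set[str]                        # classes seen in demonstrations
        expected_weight: dict[str, tuple[float, float]]   # label -> (min_kg, max_kg)

    def disambiguate(detections, context, measure_weight):
        """Filter visual hypotheses with the learned context, then resolve
        remaining ambiguity with a sensor action (here: weighing)."""
        # 1. Context pruning: drop classes that never occur in this task context.
        candidates = [d for d in detections if d.label in context.plausible_labels]
        if len(candidates) <= 1:
            return candidates[0] if candidates else None
        # 2. Sensor action: one weight measurement separates visually
        #    similar classes (e.g. a full vs. an empty cup).
        weight = measure_weight()
        in_range = [d for d in candidates
                    if context.expected_weight[d.label][0] <= weight
                    <= context.expected_weight[d.label][1]]
        # 3. Return the most confident hypothesis consistent with the measurement.
        return max(in_range, key=lambda d: d.score) if in_range else None

    # Usage with stand-in values: two visually ambiguous hypotheses, resolved by weighing.
    ctx = LearnedContext(
        plausible_labels={"cup_full", "cup_empty"},
        expected_weight={"cup_full": (0.30, 0.50), "cup_empty": (0.05, 0.15)},
    )
    hyps = [Detection("cup_full", 0.55), Detection("cup_empty", 0.52)]
    print(disambiguate(hyps, ctx, measure_weight=lambda: 0.41))  # -> cup_full hypothesis

In this sketch the sensor action is only executed when the context-pruned hypothesis set is still ambiguous, reflecting the project's aim of learning when such actions are needed rather than invoking them unconditionally.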
DFG Programme Research Grants
 
 
