Project Details
Robotics-Specific Machine Learning
Applicant
Professor Dr. Oliver Brock
Subject Area
Automation, Mechatronics, Control Systems, Intelligent Technical Systems, Robotics
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
from 2017 to 2024
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 329426068
This project will develop robotics-specific machine learning methods that enable robots to efficiently learn complex behavior. The requirement for such methods follows directly from the no-free-lunch theorems (Wolpert, 1996), which prove that no machine learning method works better than random guessing when averaged over all possible problems. The only way to improve over random guessing is to restrict the problem space and incorporate prior knowledge about this problem space into the learning method.

Of course, there are machine learning methods that apply to a wide range of real-world tasks by incorporating fairly general priors, e.g., smoothness. However, even for solving relatively simple problems, such methods already require huge amounts of data and computation. The overall problem of robotics, namely learning behavior that maps a stream of high-dimensional sensory input to a stream of high-dimensional motor output from sparse feedback, is too complex to be solved by generic machine learning methods using realistic amounts of data and computation. Other approaches avoid this problem by incorporating task-specific prior knowledge, e.g., by engineering features and representations tailored to the robotic task at hand. However, these approaches do not generalize to new tasks.

This project proposes a middle ground between general and task-specific approaches to learning robot behavior. The key idea is to incorporate robotics-specific prior knowledge, i.e., priors that are consistent with a wide range of robotic tasks. Applying this idea requires two steps: a) discovering robotics-specific prior knowledge and b) incorporating these priors into machine learning methods. We can discover such priors by looking at the structure inherent in the interactions of robots and the physical world (e.g., physics, embodiment, objectness).
To incorporate such priors, we will relate them to internal state representations, which are an intermediate result in the mapping from the robot's sensory input to its motor output. Technically, we will incorporate robotics-specific priors by i) defining appropriate learning objectives and by ii) restricting the hypothesis space. Our work will eliminate the need for task-specific feature engineering while keeping the data and computation requirements at a minimum. As a consequence, this project will enable robots to autonomously learn complex tasks from raw sensory input. We have extensive preliminary work showing the feasibility and the potential of our approach. This project will develop this idea further by:

1) online learning during the interaction,
2) solving partially observable tasks by simultaneously learning task-specific state representations and recursive loops to estimate them,
3) learning structured state representations that make reinforcement learning more efficient for robotic tasks, and
4) incrementally learning state representations for multiple related tasks.
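To make step i), encoding robotics-specific priors as learning objectives, more concrete, the following is a minimal sketch in the spirit of the "robotic priors" formulation from Jonschkowski and Brock's earlier work on state representation learning. It shows two example priors as loss terms over a sequence of learned state vectors: temporal coherence (physical states change gradually over time) and proportionality (the same action should produce state changes of similar magnitude). The function names, the use of NumPy arrays, and the brute-force pairwise comparison are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np


def temporal_coherence_loss(states):
    """Prior: physical states change gradually, so penalize large
    jumps between consecutive learned states s_t and s_{t+1}."""
    diffs = np.diff(states, axis=0)  # s_{t+1} - s_t for each step
    return float(np.mean(np.sum(diffs ** 2, axis=1)))


def proportionality_loss(states, actions):
    """Prior: the same action should cause state changes of similar
    magnitude, so penalize differing change magnitudes for step
    pairs that share an action (brute-force over all pairs)."""
    magnitudes = np.linalg.norm(np.diff(states, axis=0), axis=1)
    loss, pairs = 0.0, 0
    for i in range(len(magnitudes)):
        for j in range(i + 1, len(magnitudes)):
            if np.array_equal(actions[i], actions[j]):
                loss += (magnitudes[i] - magnitudes[j]) ** 2
                pairs += 1
    return loss / max(pairs, 1)


# A state trajectory moving at constant velocity under a repeated
# action satisfies both priors up to a constant coherence penalty.
states = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
actions = np.array([[1], [1]])
print(temporal_coherence_loss(states))   # mean squared step size
print(proportionality_loss(states, actions))
```

In the actual approach, such loss terms would be summed (with weights) and minimized over the parameters of a learned mapping from raw observations to states, which is how the prior shapes the representation without any task-specific feature engineering.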
DFG Programme
Research Grants