Project Details
Linking metric and symbolic levels in autonomous reinforcement learning
Applicant
Professor Dr. Klaus Obermayer
Subject Area
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
from 2011 to 2020
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 200282059
Reinforcement learning (RL) has emerged as a well-founded theoretical basis for autonomous agents, but it is rarely used in practical applications. One essential obstacle is the curse of dimensionality, which we addressed in our previous SPP-1527-funded project "Value representation in large factored state spaces". In that work, however, we found the lack of adaptivity to new situations to be an equally serious obstacle to practical applications. Our framework therefore contains a symbolic layer, borrowed from relational RL, that adapts the metric transition model to new situations. We hypothesize that the metric and symbolic layers of RL can work in a complementary fashion to provide the precision and flexibility required in many practical applications. To investigate the possibilities of this framework, we pursue the following scientific goals: (1) to develop a Bayesian method for actively learning a metric transition model with relational flexibility, and (2) to investigate synergies in the hierarchy of metric and symbolic planners in order to improve planning and to adjust the relational symbols and actions. The envisioned methods will cast the fields of factored MDPs and relational RL into a common framework and allow both top-down and bottom-up adaptation. We expect significant improvements on relational tasks that depend strongly on an underlying metric space, for example in robotic applications.
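To illustrate the flavor of goal (1), a minimal sketch of Bayesian transition-model learning with uncertainty-driven ("active") action selection is given below. This is not the project's method: the class, the Dirichlet-multinomial posterior over a small discrete state space, and the entropy-based exploration rule are all illustrative assumptions standing in for the metric model with relational flexibility described above.

```python
import numpy as np

# Illustrative sketch only: a Dirichlet-multinomial posterior over discrete
# transitions P(s' | s, a), with a crude active-learning rule that probes the
# action whose model is currently most uncertain. The toy MDP and all names
# are hypothetical; the project's metric/symbolic framework is far richer.

class DirichletTransitionModel:
    def __init__(self, n_states, n_actions, prior=1.0):
        # alpha[s, a, s'] holds Dirichlet pseudo-counts for P(s' | s, a)
        self.alpha = np.full((n_states, n_actions, n_states), prior)

    def update(self, s, a, s_next):
        # Conjugate update: each observed transition adds one pseudo-count
        self.alpha[s, a, s_next] += 1.0

    def mean(self, s, a):
        # Posterior-mean transition distribution for the pair (s, a)
        return self.alpha[s, a] / self.alpha[s, a].sum()

    def entropy(self, s, a):
        # Entropy of the posterior mean, used as a simple uncertainty signal
        p = self.mean(s, a)
        return -np.sum(p * np.log(p + 1e-12))

    def active_action(self, s):
        # Actively pick the action whose transition model is most uncertain
        n_actions = self.alpha.shape[1]
        return int(np.argmax([self.entropy(s, a) for a in range(n_actions)]))
```

After a few observed transitions for one action, that action's posterior sharpens, and the active rule steers exploration toward the still-uniform (hence more uncertain) alternatives.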
DFG Programme
Priority Programmes
Subproject of
SPP 1527:
Autonomous Learning