
Reinforcement learning in a continuous state space

Subject area: Mathematics
Funding: 2008 to 2015
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 79254695
 
This project considers discretization approaches for continuous state spaces arising in optimal control and reinforcement learning, and aims to make significant progress on function approximation in more than 6 dimensions using an adaptive sparse grid approach. In both of these closely related application areas, dynamic programming methods are used to numerically estimate the utility of taking actions in states of the domain; this gives rise to functions over the state space from which optimal policies can then be derived. Discretization techniques based on standard finite-element or finite-difference schemes are widely used, but since the number of dimensions can be large, one quickly runs into the curse of dimensionality. We have shown empirically that an adaptive sparse grid approach can handle typical optimal control problems in up to 6 dimensions and thus breaks the curse of dimensionality to a certain extent, although the current algorithm still faces theoretical and practical problems. It remains to be investigated under which conditions the numerical procedure can succeed. This includes further empirical work to push the efficiency and dimensionality boundaries, but in particular we aim to formulate a solid theoretical foundation for a finite-difference-style sparse grid approach that allows suitable choices of the parameters arising in the numerical scheme.
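The dynamic programming step described above can be illustrated with a minimal sketch. This is not the project's method: the project uses adaptive sparse grids in up to 6 dimensions, whereas the toy below uses a full regular grid in one dimension (exactly the setting where the curse of dimensionality does not yet bite) purely to show how value iteration over a discretized continuous state space, with interpolation for off-grid successor states, gives rise to a value function from which a policy is derived. All names and the toy control problem are illustrative assumptions.

```python
import numpy as np

def value_iteration(n_points=101, gamma=0.95, dt=0.05, n_iters=500):
    """Value iteration on a uniformly discretized 1-D state space.

    Toy problem (illustrative, not from the project): state x in [0, 1],
    two admissible controls moving x left or right by dt, running reward
    -|x - 0.5|, i.e. the controller should steer toward x = 0.5.
    """
    xs = np.linspace(0.0, 1.0, n_points)   # grid over the state space
    actions = np.array([-1.0, 1.0])        # admissible controls
    reward = -np.abs(xs - 0.5)             # stage reward on the grid
    V = np.zeros(n_points)                 # value function on the grid
    for _ in range(n_iters):
        # For each action: step the dynamics, clip to the domain, and
        # evaluate V at the (generally off-grid) successor states by
        # linear interpolation -- the role a sparse grid interpolant
        # plays in higher dimensions.
        Q = np.stack([
            reward + gamma * np.interp(np.clip(xs + a * dt, 0.0, 1.0), xs, V)
            for a in actions
        ])
        V_new = Q.max(axis=0)              # Bellman optimality update
        if np.max(np.abs(V_new - V)) < 1e-9:
            break
        V = V_new
    policy = actions[Q.argmax(axis=0)]     # greedy policy from the values
    return xs, V, policy
```

On a full grid this costs O(n_points^d) in d dimensions, which is precisely the scaling the project's adaptive sparse grid approach is meant to break.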
DFG programme: Priority Programmes