Project Details

Algorithm Control: Efficient Learning to Control Algorithm Parameters

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term since 2020
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 442750095
 
In the last decade, research on algorithm configuration has shown that many algorithms are sensitive to their parameter configurations. To achieve peak performance, these parameters have to be adjusted to the problem instances at hand. This applies in particular to AI algorithms. To avoid tedious and error-prone manual parameter tuning, configuration systems automate this process by searching for a well-performing parameter configuration. However, such a configuration fixes the algorithm's behavior once at the beginning of a run and is therefore often sub-optimal, since a dynamic adaptation of the configuration is frequently required while the algorithm runs.

We propose to learn "algorithm control policies" from data: how can we automatically adapt parameter configurations during the runtime of an algorithm? Researchers have shown for evolutionary algorithms and machine learning that algorithm control can, in principle, further improve the performance of algorithms. However, existing approaches to this algorithm control problem have major limitations compared to the state of the art in algorithm configuration: (i) the number of controlled parameters is small (often just one); (ii) only discretized parameter values are considered; and (iii) the learned control policies are trained on few instances (also often just one), so they do not generalize well across different instance sets.

In the proposed project, we will model algorithm control as a reinforcement learning (RL) problem: actions change parameter configurations, and states correspond to the states of an algorithm solving a problem instance. Deep RL has recently shown that challenging problems can be learned, e.g., the game of Go, Atari games, and poker. We believe that this recent progress in deep RL will enable us to successfully obtain effective control policies, and we dub our approach "deep algorithm control" (DAC).
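To make the RL formulation concrete, the following toy sketch casts a running algorithm (here, gradient descent on a one-dimensional quadratic) as an environment: the controlled parameter is the step size, the state exposes the algorithm's internal state, actions change the configuration during the run, and the reward is the per-step improvement. All names, the instance distribution, and the action set are illustrative assumptions, not part of the project.

```python
import random

class GradientDescentEnv:
    """Toy DAC environment: the 'algorithm' is gradient descent on a random
    quadratic f(x) = a * x^2, and the controlled parameter is its step size.
    Purely illustrative; the interface mimics the usual RL reset/step loop."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        # Sample a problem instance (curvature a) and an initial iterate.
        self.a = self.rng.uniform(0.5, 5.0)
        self.x = self.rng.uniform(-10.0, 10.0)
        self.lr = 0.1  # initial parameter configuration
        return self._state()

    def _state(self):
        # The state exposes the running algorithm's internals:
        # current iterate, current objective value, current step size.
        return (self.x, self.a * self.x * self.x, self.lr)

    def step(self, action):
        # Actions adapt the parameter configuration during the run:
        # 0 = halve the step size, 1 = keep it, 2 = double it.
        self.lr *= (0.5, 1.0, 2.0)[action]
        before = self.a * self.x * self.x
        self.x -= self.lr * 2.0 * self.a * self.x  # one gradient step
        after = self.a * self.x * self.x
        # Reward is the improvement in objective value; the episode ends
        # once the instance is (numerically) solved.
        return self._state(), before - after, after < 1e-8
```

A control policy is then simply a mapping from such states to step-size actions; a fixed configuration corresponds to the policy that always returns action 1.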
DAC is similar to these game applications and therefore a promising research direction: (i) we can collect large amounts of training data in an offline phase by evaluating different policies on sets of instances; (ii) the state and action spaces are huge in both applications; and (iii) deep learning models can predict the performance of AI algorithms.

By successfully obtaining effective DAC policies, this approach will be a powerful generalization of other meta-algorithmic approaches, such as algorithm configuration, algorithm selection, and their combinations. We therefore believe that DAC is a promising research direction that will have a large impact on many AI fields, as algorithm configuration and algorithm selection have had before.
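The offline data-collection phase described in point (i) can be sketched as follows: a control policy is evaluated on a set of problem instances, and the resulting transitions are logged for training. The `reset`/`step` interface, the dummy environment, and all names are illustrative assumptions.

```python
def collect_rollouts(env, policy, n_instances, horizon=50):
    """Evaluate a control policy on several problem instances and log the
    resulting (state, action, reward, next_state) transitions for training.
    `env` is any object with reset()/step() in the usual RL interface."""
    data = []
    for _ in range(n_instances):
        state, done, t = env.reset(), False, 0
        while not done and t < horizon:
            action = policy(state)
            next_state, reward, done = env.step(action)
            data.append((state, action, reward, next_state))
            state, t = next_state, t + 1
    return data

class _DummyEnv:
    """Stand-in 'algorithm' that terminates after three steps (illustration only)."""
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        return float(self.t), 1.0, self.t >= 3

# Two instances, three steps each: six logged transitions.
transitions = collect_rollouts(_DummyEnv(), policy=lambda s: 0, n_instances=2)
```

Repeating this over many policies and instance sets yields the large offline training corpus that the game applications of deep RL also rely on.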
DFG Programme Research Grants
 
 
