Project Details

Prediction Games: Robust, Parallel Machine Learning Algorithms

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term from 2010 to 2019
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 173120778
 
Machine learning addresses problem settings that involve automatically building models from data and predicting the future behavior of the system that is reflected in the data. Most results of this field are based on the assumption that the available data and the future behavior of the system are governed by the same probability distribution. This assumption is an oversimplification for applications in which an active adversary can influence the future behavior of the system; this is the case, for instance, for detecting phishing emails and fraudulent credit card transactions.

In the preceding project, we have modeled adversarial learning problems using paradigms of game theory. We have identified conditions under which non-zero-sum prediction games have a unique equilibrium point; such an equilibrium is an optimal solution when learner and adversary each aim at minimizing their own cost function. We have derived primal and dual learning algorithms for static prediction games, in which learner and adversary act simultaneously without knowing the action chosen by their opponent. We have also derived learning methods for problems in which the adversary can react to the model chosen by the learner, and for problems in which the learner is uncertain about the exact cost function that the adversary is trying to minimize. In the context of email spam filtering, we have observed empirically that predictive models generated by game-theoretic learning algorithms maintain high accuracy for longer periods of time than models generated by learning algorithms that do not account for an active adversary. However, game-theoretic learning algorithms have to solve complex optimization problems, and these methods are not immediately practical for large data sets.

In the succeeding project, we want to focus on scalable solutions to robust learning problems that can be executed in parallel on GPU and cluster architectures. The highest degree of parallelism can be attained by algorithms that first solve subproblems entirely in parallel and then combine the solutions to these subproblems into an overall model in a single, final aggregation step. However, not all optimization problems can be solved by algorithms with this structure. The goals of the project are therefore the theoretical analysis of this approach to parallel, robust learning, and the development and empirical evaluation of scalable, parallel, robust (in particular, game-theoretic) machine learning algorithms. A central element of our investigation is the analysis of whether algorithms that follow this structure converge towards the optimal solution of the underlying optimization problem. We will explore several approaches to splitting robust learning problems into subproblems that can be solved in parallel, and we will study the properties of parallel solutions to zero-sum, static non-zero-sum, and Stackelberg prediction games both theoretically and empirically.
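As a rough illustration of the kind of game-theoretic learning problem described above, the following minimal Python sketch trains a linear classifier against a zero-sum adversary that may shift every feature vector within an L2 ball of radius eps. In this simplified setting the adversary's best response has a closed form (it reduces each example's margin by eps times the norm of the weight vector), so the learner's robust loss can be minimized by plain gradient descent. This is a simplification under our own assumptions, not the project's algorithm; the function name and all parameters are illustrative.

    import numpy as np

    def robust_logistic_regression(X, y, eps=0.1, lam=0.1, lr=0.1, n_iter=500):
        """Train a linear classifier against the worst-case L2-bounded
        perturbation of every feature vector. For this zero-sum game the
        adversary's best response has a closed form: it reduces the margin
        of each example by eps * ||w||, so no inner optimization is needed."""
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            norm_w = np.linalg.norm(w) + 1e-12                   # avoid division by zero
            margins = y * (X @ w) - eps * norm_w                 # robust (worst-case) margins
            s = 1.0 / (1.0 + np.exp(np.clip(margins, -30, 30)))  # sigmoid(-margin)
            dmargin_dw = y[:, None] * X - eps * (w / norm_w)     # per-example margin gradient
            grad = -(s[:, None] * dmargin_dw).sum(axis=0) / n + lam * w
            w -= lr * grad                                       # gradient step on robust loss + L2 penalty
        return w

The non-zero-sum and Stackelberg games studied in the project do not generally admit such a closed-form adversary and lead to the more complex optimization problems mentioned above.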
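The parallelization structure described above (solve independent subproblems, then aggregate once) can be sketched as follows. The sketch reuses the hypothetical robust_logistic_regression from the previous example, splits the data into disjoint shards, solves each shard in a separate worker process, and merges the shard models in one final parameter-averaging step; it is only one possible splitting and aggregation strategy, not the project's method, and fit_parallel is an illustrative name.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def fit_parallel(X, y, n_shards=4):
        """Solve one robust subproblem per data shard; the calls are fully
        independent (each could run on its own GPU or cluster node) and
        their results are combined in a single, final aggregation step."""
        X_shards = np.array_split(X, n_shards)
        y_shards = np.array_split(y, n_shards)
        with ProcessPoolExecutor(max_workers=n_shards) as pool:
            models = list(pool.map(robust_logistic_regression, X_shards, y_shards))
        return np.mean(models, axis=0)   # parameter averaging, no further communication

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 20))
        y = np.sign(X @ rng.normal(size=20))
        w = fit_parallel(X, y)
        print("training accuracy of aggregated model:", np.mean(np.sign(X @ w) == y))

Whether, and how fast, such solve-then-aggregate schemes converge to the solution of the original robust learning problem is exactly the convergence question the project investigates.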
DFG Programme Research Grants
 
 
