Project Details

Trustworthy Reinforcement Learning for Multi-Agent Systems: Foundations of Robust and Accountable Decision Making

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term since 2022
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 467367360
 
Reinforcement learning is a computational approach to modeling and automating sequential decision making under uncertainty. Recent advances in reinforcement learning have highlighted its potential in high-stakes domains such as recommendation systems, transportation, and education. However, concerns have been raised about the applicability of state-of-the-art reinforcement learning techniques to such domains, since they fail to account for complexities present in real-world scenarios, including their multi-agent structure.

In this project, we propose a framework for designing trustworthy and reliable reinforcement learning algorithms. We identify two key components of such a framework: agent design (designing the reinforcement learning algorithms themselves) and system design (designing supporting tools that improve the agents' learning processes). Within each component, we study two properties that the framework must have in order to be deemed trustworthy: robustness (the ability to deal with adversaries and uncertainty) and accountability (the ability to provide an account of one's behavior). In the context of robustness, the proposal explicates the design of agents that are robust to the presence of other agents, including adversaries, as well as the design of systems that support robust learning by channeling trusted information. In the context of accountability, it explicates the design of agents that can provide explanations for their actions, as well as the design of systems that support accountability by assigning responsibility for outcomes. The proposal further outlines an agenda for resolving the technical challenges raised by these four research directions and proposes approaches that draw on multi-agent learning, reinforcement learning, and game theory.
The proposed agenda focuses on theoretical and algorithmic aspects of robust and accountable decision making that provide foundational steps toward trustworthy reinforcement learning for multi-agent systems.
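To make the notion of robustness to adversarial agents concrete, the following is a minimal illustrative sketch (not taken from the project itself): robust value iteration on a small randomly generated two-player zero-sum Markov game, where the protagonist maximizes its worst-case return over the adversary's action choices (a maximin, or "security level", backup over deterministic policies). All sizes and the random instance are arbitrary assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only: robust (maximin) value iteration for a tiny
# two-player zero-sum Markov game with random rewards and transitions.
n_states, n_a, n_b, gamma = 3, 2, 2, 0.9
rng = np.random.default_rng(0)
# R[s, a, b]: reward to the protagonist when it plays a and the adversary b.
R = rng.uniform(-1, 1, size=(n_states, n_a, n_b))
# P[s, a, b, s']: transition probabilities, normalized over next states.
P = rng.uniform(size=(n_states, n_a, n_b, n_states))
P /= P.sum(axis=-1, keepdims=True)

V = np.zeros(n_states)
for _ in range(1000):
    # Q[s, a, b] = immediate reward + discounted expected next-state value.
    Q = R + gamma * (P @ V)
    # Robust backup: best protagonist action against the worst adversary reply.
    V_new = Q.min(axis=2).max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# Greedy robust policy with respect to the converged worst-case values.
policy = (R + gamma * (P @ V)).min(axis=2).argmax(axis=1)
print("worst-case values:", np.round(V, 3))
print("robust policy:", policy)
```

Because the maximin backup is a gamma-contraction, the iteration converges to a unique worst-case value function; richer formulations (mixed strategies, learned rather than known models) replace the inner min/max with a matrix-game solve or sampled updates.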
DFG Programme Independent Junior Research Groups
 
 
