Project Details
TRR 318: Constructing explainability
Subject Area
Computer Science, Systems and Electrical Engineering
Humanities
Social and Behavioural Sciences
Term
since 2021
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 438445824
The scope of the EU right to explanation has fueled the need to improve eXplainable Artificial Intelligence (XAI), with the aim of strengthening the rights of individuals affected by AI-based recommendations. Among other purposes, explanations serve the right to contest an AI output and protect humans from being left without control. Explanations, however, can only be functional if they are relevant. Yet the current state of the art in XAI is criticized for being driven by the requirements of developers rather than those of users (explainees). A key challenge is thus to make an explanation relevant for a particular explainee. For the first funding period, TRR 318 proposed co-constructive interaction as an approach to XAI. We carried out theoretical and empirical basic research to understand explanation processes and the involvement of explainees in human–human and human–AI interactions. We gained insights into how explanatory dialogs are structured and unfold, and into how users can be modeled to adapt explanations. Building on these insights, we developed initial XAI systems that involve the users: these systems interact co-constructively and adapt the explanation process incrementally. We further asked whether and in which situations users care about understanding an AI’s functioning and outputs. Our results show that users’ explanatory needs are diverse and change dynamically. Our research on the explainees’ active involvement in the explanation process and on the relevant social aspects builds a strong foundation for the paradigm of sXAI (social XAI). We position ourselves as pushing explainable AI toward the development of explaining AI systems that provide an environment (a context that emerges during interaction) in which users can exercise their agency and build knowledge.

For the second funding period, we recognize that explanations need to be assessed within the context of explaining. We will therefore focus on the relevance of explanations and endow explaining algorithms with the ability to proactively and jointly construct a shared context, establishing the relevant factors together with the explainee. Our innovative proposal is that the context itself is co-constructed as part of the interaction. Driven by AI’s growing ubiquity, our aim is to develop context-aware XAI systems that can co-construct the relevant context incrementally with the user, both in and from an interaction. To this end, we are extending our sXAI approach with a systematization of four types of context, guiding us toward XAI applications that are more relevant, flexible, and versatile than current technology. In the long term, we envision investigating autonomous co-constructive XAI in the real world to gain insights into how human interlocutors (co-)adapt their behavior in co-constructive sociotechnical settings, thereby generating novel practices. These insights will complement our theoretical work, which critically reflects on co-construction as a process.
DFG Programme
CRC/Transregios
Current projects
- A01 - Adaptive explanation generation (Project Heads Buhl, Heike M.; Kopp, Stefan; Rohlfing, Katharina)
- A02 - Monitoring the understanding of explanations (Project Heads Buschmeier, Hendrik; Grimminger, Angela; Wagner, Petra)
- A03 - Co-constructing explanations between AI explainer and human explainee under arousal or non-arousal (Project Heads Thommes, Kirsten; Wrede, Britta)
- A04 - Co-constructing duality-enhanced explanations (Project Heads Buhl, Heike M.; Kern, Friederike; Schulte, Carsten)
- A05 - Contextualized and online parametrization of attention in human–robot explanatory dialog (Project Heads Rohlfing, Katharina; Scharlau, Ingrid; Wrede, Britta)
- A06 - Explaining the multimodal display of stress in clinical explanations (Project Heads Drimalla, Hanna; Wagner, Petra)
- B01 - A dialog-based approach to explaining machine-learning models (Project Heads Cimiano, Philipp; Esposito, Elena; Ngonga Ngomo, Axel-Cyrille)
- B05 - Co-constructing explainability with an interactively learning robot (Project Heads Schulte, Carsten; Vollmer, Anna-Lisa)
- B06 - Ethics and normativity of explainable AI (Project Heads Alpsancar, Suzana; Matzner, Tobias)
- B07 - Communicative practices of requesting information and explanation from LLM-based agents (Project Heads Buschmeier, Hendrik; Kern, Friederike)
- C01 - Explanations for healthy distrust in large language models (Project Heads Hammer, Barbara; Paaßen, Benjamin; Scharlau, Ingrid)
- C02 - Interactive learning of explainable, situation-adapted decision models (Project Heads Hüllermeier, Eyke; Thommes, Kirsten)
- C03 - Interpretable machine learning: Explaining change (Project Heads Hammer, Barbara; Hüllermeier, Eyke)
- C04 - Metaphors as an explanation tool (Project Heads Scharlau, Ingrid; Wachsmuth, Henning)
- C05 - Creating explanations in collaborative human–machine knowledge exploration (Project Heads Cimiano, Philipp; Kopp, Stefan)
- C07 - Co-construction-following large language models for explaining (Project Heads Ngonga Ngomo, Axel-Cyrille; Wachsmuth, Henning)
- INF - Retrieval-augmented information provision (Project Heads Cimiano, Philipp; Ngonga Ngomo, Axel-Cyrille; Wachsmuth, Henning)
- MGK - Integrated Research Training Group (Project Head Scharlau, Ingrid)
- WIKO - Questions about explainable technology (Project Heads Horwath, Ilona; Schulte, Carsten; Verständig, Dan; Wrede, Britta)
- Z - Central tasks of the Collaborative Research Center (Project Head Rohlfing, Katharina)
Completed projects
Applicant Institution
Universität Paderborn
Co-Applicant Institution
Universität Bielefeld
Participating Institution
Gottfried Wilhelm Leibniz Universität Hannover; Ludwig-Maximilians-Universität München; Technische Universität Berlin
Spokesperson
Professor Dr. Katharina Rohlfing
