Project Details
Causally Personalized eXplainable Artificial Intelligence (CPXAI): Leveraging Individual Heterogeneities in AI-Explainability Responses Through Causal Machine Learning
Applicant
Professor Dr. Kevin Bauer
Subject Area
Operations Management and Computer Science for Business Administration
Term
since 2025
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 569214918
The project aims to understand why people benefit differently from explanations provided by AI systems and to leverage the identified heterogeneities to personalize how AI explanations are provided. Although modern AI can achieve very high accuracy, its inner workings are often hidden from users, leading to confusion, lower trust, and even mistakes in decision making. With increasing calls for transparency in AI from both companies and regulators, this project focuses on improving how AI explains its decisions.

The first goal is to identify which personal factors, such as common biases or tendencies in processing information, determine how well someone can use an AI explanation. The project will build a new theoretical framework that combines insights from Behavioral Economics and Information Systems. This framework will map out how individuals’ cognitive habits (such as overconfidence, neglecting statistical information, or other reasoning errors) influence the way they interpret and benefit from AI explanations.

The second goal is to put this understanding into practice by developing a system called Causally Personalized eXplainable AI (CPXAI). This system will use causal machine learning methods to predict which type of explanation (visual, textual, or even no explanation at all) will best support an individual’s decision-making process. The idea is to tailor the way explanations are delivered to each person’s unique information-processing style, ultimately leading to more accurate decisions and better use of AI.

To test these ideas, the project is organized into two main work packages. The first package will analyze and model the effects of different information-processing tendencies on the success of AI explanations. The second package will focus on creating, testing, and refining the CPXAI system through controlled experiments, including settings that mimic real-life decisions such as evaluating loan applications or determining trustworthiness in transactions.
By understanding how individual heterogeneities shape the usability of AI explanations, and by tailoring explanations to individual needs with causal machine learning methods, the project aims to boost trust in AI systems and improve the overall efficiency of decisions made with AI support.
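The core CPXAI idea, predicting from a person's traits which explanation type yields the best decisions and assigning accordingly, can be illustrated with a minimal T-learner-style sketch. Everything here is an illustrative assumption, not the project's actual method or data: the simulated "literacy" trait, the data-generating process, and the per-arm linear outcome models are hypothetical stand-ins for the causal machine learning estimators the project will develop.

```python
import random

random.seed(0)
treatments = ["none", "visual", "textual"]

# Simulated randomized experiment (assumed data-generating process):
# decision accuracy under "textual" explanations rises with a literacy
# trait x, falls with x under "visual", and is flat under "none".
data = []
for _ in range(3000):
    x = random.gauss(0.0, 1.0)
    t = random.choice(treatments)
    effect = {"none": 0.0, "visual": -0.1 * x, "textual": 0.1 * x}[t]
    y = 0.6 + effect + random.gauss(0.0, 0.05)
    data.append((x, t, y))

def fit_line(points):
    """Ordinary least squares for y = a + b*x on (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return my - b * mx, b

# T-learner: fit one outcome model per explanation type ("treatment arm").
models = {t: fit_line([(x, y) for x, tt, y in data if tt == t])
          for t in treatments}

def recommend(x):
    """Assign the explanation type with the highest predicted accuracy."""
    return max(treatments, key=lambda t: models[t][0] + models[t][1] * x)
```

Under these assumptions, `recommend(2.0)` steers a high-literacy user toward textual explanations while `recommend(-2.0)` favors visual ones, which is the personalization logic in miniature: estimate heterogeneous treatment effects of explanation formats, then deliver the format with the largest predicted benefit per individual.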
DFG Programme
Research Grants
Co-Investigator
Dr. Moritz von Zahn
