Project Details
Generating Explanations and Suggestions to Mitigate Explainability Requirements (softXplainer)
Applicant
Professor Dr. Kurt Schneider
Subject Area
Software Engineering and Programming Languages
Term
since 2021
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 470146331
Software is becoming increasingly complex, and it is sometimes difficult to use. Users do not understand what the software does, or why. This is often due to a clash between user expectations and perceived system behavior. End users are exposed to complex software behavior without a human expert available who could answer their questions. Explanations provided by the software itself could mitigate that confusion. Explainability refers to the ability of software to explain its own behavior. High-quality requirements for explainability are an important prerequisite for useful and self-explainable software.

The first phase of this project dealt with requirements for explainability: What can reasonably be expected, and thus required, in terms of explanations? In this extension, the focus shifts from requirements to the foundations of run-time support for explainability. Requirements are related to heuristics that were collected in the first phase. Some heuristics define triggers that determine when an explanation should be given. Other heuristics determine what an explanation should look like. A third type of heuristic recommends what kind of explanation should be given for a certain user need. Such a set of heuristics is bundled in a so-called Explainer (sketched in code below). Several Explainers together can be enacted to generate adequate explanations when they are needed.

Telemetry data is used as input to trigger explanations. User behavior reflects how well users understand and operate software: Do they hesitate, cancel operations, or show signs of confusion? The project investigates how telemetry data on user behavior can be exploited to trigger explanations at run time. Telemetry data is collected and processed in real time or near real time.

This project adds advanced concepts, such as Explainers, and techniques, such as behavior analysis, telemetry, and LLMs, to the toolbox of explainability. Providing adequate explanations is the main goal of explainability. A new variant of mitigation is added: explaining to developers how they can improve the software at design time. Explainability has gained growing research attention in recent years. While explainable AI (XAI) mostly refers to interpretability as a technique for developers to improve their AI algorithms, we focus on explaining any complex software to its users. We measure our success through empirical studies and assessments by practitioners.
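To make the Explainer concept concrete, the following is a minimal sketch in Python. All names (TelemetryEvent, Explainer, hesitation_trigger, and so on) are illustrative assumptions for this sketch, not the project's actual design, and the heuristics shown are deliberately simplistic.

from dataclasses import dataclass
from typing import Callable, List, Optional
import time

@dataclass
class TelemetryEvent:
    """One user-interaction event captured by the running software."""
    kind: str         # e.g. "click", "cancel", "idle"
    widget: str       # UI element the event refers to
    timestamp: float  # seconds since epoch

@dataclass
class Explainer:
    """Bundles the three kinds of heuristics named above:
    when to explain (trigger), what to explain (content),
    and how to present it (present)."""
    trigger: Callable[[List[TelemetryEvent]], bool]
    content: Callable[[List[TelemetryEvent]], str]
    present: Callable[[str], str]

    def maybe_explain(self, events: List[TelemetryEvent]) -> Optional[str]:
        # Produce an explanation only if the trigger heuristic fires.
        if self.trigger(events):
            return self.present(self.content(events))
        return None

def hesitation_trigger(events: List[TelemetryEvent],
                       idle_seconds: float = 10.0) -> bool:
    """Trigger heuristic (assumed): a long pause after the last
    interaction is read as hesitation, a possible sign of confusion."""
    if not events:
        return False
    return time.time() - events[-1].timestamp > idle_seconds

def hesitation_content(events: List[TelemetryEvent]) -> str:
    """Content heuristic (assumed): refer to the element the user
    last interacted with."""
    widget = events[-1].widget if events else "this view"
    return f"It looks like you paused on '{widget}'. You can ..."

tooltip_explainer = Explainer(
    trigger=hesitation_trigger,
    content=hesitation_content,
    present=lambda text: f"[tooltip] {text}",  # format heuristic (assumed)
)

# At run time, several Explainers would be evaluated against the incoming
# telemetry stream; each whose trigger fires contributes an explanation.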
DFG Programme
Research Grants
