Project Details

Explainable artificial intelligence for fault diagnosis: Impacts on human diagnostic processes and performance

Subject Area Human Factors, Ergonomics, Human-Machine Systems
Automation, Mechatronics, Control Systems, Intelligent Technical Systems, Robotics
Term since 2021
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 468325352
 
Fault diagnosis in industrial settings is a challenging task. Although it can be supported by machine learning (ML), human-machine cooperation is essential to monitor and evaluate ML algorithms. However, this is hampered by the fact that ML relies on black-box models. To increase its transparency, explainable artificial intelligence (XAI) can indicate which inputs an ML algorithm has used to compute a solution, for instance by highlighting the specific areas in images that the algorithm attended to. Previous research has revealed benefits and pitfalls of XAI in other task contexts, but it is unclear how XAI affects human diagnostic processes and performance during fault diagnosis. Specifically, it needs to be investigated under what conditions XAI helps people to critically evaluate ML results, and under what conditions it leads them to over-rely on incorrect explanations.

The present project investigates how diagnostic processes and performance are affected by XAI that explains ML outcomes on three levels: anomaly detection, fault classification, and fault diagnosis. XAI for detection and classification is implemented by highlighting areas in product images that the algorithm attended to. XAI for diagnosis informs people which process parameters from the previous production step the algorithm has used. In a computer-based chocolate production scenario, participants either receive XAI information or are only informed about the results of ML algorithms. Their task is to evaluate these results. Besides the presence of XAI, we vary the correctness of the ML results and the difficulty of the task. To assess diagnostic performance, we analyse solution times and the correctness of participants' responses. To assess diagnostic processes, we analyse eye movements and diagnostic actions aimed at cross-checking the ML results.

We hypothesise that XAI improves diagnostic speed when ML results are correct, and diagnostic accuracy when they are incorrect. However, we expect these effects to depend on the type of ML error. Specifically, we hypothesise that participants tend to over-rely on XAI when additional faults besides the highlighted one are present. Moreover, we expect the effects of XAI to vary with task difficulty. To test these hypotheses, we conduct three experiments, one for each level of ML and XAI (i.e., detection, classification, diagnosis). In addition to these experiments, we conduct a pilot study to select suitable stimuli and a user study to investigate the interpretability of XAI outputs. Finally, in a field study we investigate to what degree our experimental results can be transferred to expert fault diagnosis in a real plant. Taken together, these studies describe how diagnostic performance is affected by XAI and explain these effects by providing insights into the underlying diagnostic processes.
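The highlighting of attended image regions described above corresponds to saliency-style explanations. As a purely illustrative sketch (not the project's actual implementation), occlusion-based saliency is one common way to produce such a highlight map for a black-box image classifier: regions whose occlusion lowers the model's score the most are the regions the model relied on. All names and parameters below (score_fn, patch, stride) are assumptions made for the example.

    # Illustrative sketch: occlusion-based saliency for a black-box scoring function.
    import numpy as np

    def occlusion_saliency(image, score_fn, patch=8, stride=4, baseline=0.0):
        """Heatmap where high values mark regions whose occlusion lowers the
        model's score the most, i.e. regions the model relied on."""
        h, w = image.shape[:2]
        base_score = score_fn(image)
        heat = np.zeros((h, w), dtype=float)
        counts = np.zeros((h, w), dtype=float)
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                occluded = image.copy()
                occluded[y:y + patch, x:x + patch] = baseline  # mask one patch
                drop = base_score - score_fn(occluded)          # score decrease
                heat[y:y + patch, x:x + patch] += drop
                counts[y:y + patch, x:x + patch] += 1
        return heat / np.maximum(counts, 1)

    # Toy usage: a "model" that scores the mean brightness of the image centre,
    # so the saliency map should highlight the central region.
    img = np.random.rand(32, 32)
    centre_score = lambda x: float(x[12:20, 12:20].mean())
    saliency = occlusion_saliency(img, centre_score)
    print(saliency.shape, saliency[16, 16] > saliency[0, 0])

In an interface, such a map would typically be overlaid on the product image so that participants can see which areas drove the detection or classification result.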
DFG Programme Research Grants
Co-Investigator Dr. Romy Müller
 
 
