Project Details
Validating Explainable AI in Clinical Neuroimaging
Applicant
Dr. Marc-André Schulz
Subject Area
Methods in Artificial Intelligence and Machine Learning
Radiology
Term
since 2025
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 565356445
The clinical adoption of deep learning in neuroimaging faces a critical barrier: while these models achieve impressive accuracy in tasks such as tumor detection and disease classification, their decision-making processes remain largely opaque. Current explainable AI (XAI) methods claim to reveal how neural networks process brain imaging data, but their reliability and clinical validity remain unverified. This lack of validated interpretation methods significantly hinders the translation of deep learning advances into clinical practice.

We present a comprehensive framework for validating XAI methods in neuroimaging through three complementary approaches. First, we introduce a novel ground-truth validation strategy using imaging-derived phenotypes (IDPs). By systematically removing global brain effects, we create controlled validation targets with known anatomical specificity, enabling objective evaluation of explanation methods. Second, we evaluate XAI methods on distributed disease patterns, focusing in particular on multiple sclerosis lesions, which provide naturally occurring validation targets with established anatomical distributions. Third, we leverage brain age prediction as a clinical validation framework, where age-related anatomical changes are well documented and provide a known ground truth for evaluating explanation quality. Our validation framework will analyze up to 100,000 brain MRI scans from major population datasets (UK Biobank, NAKO), enabling systematic investigation of technical robustness, architectural differences, and demographic factors. Through this structured evaluation, we will establish clear criteria for when XAI methods can be trusted to capture genuine neuroanatomical features rather than spurious correlations.

The project delivers three key outcomes: (1) a protocol for validating and evaluating XAI methods in neuroimaging, (2) an open-source computational toolkit implementing this protocol, optimized for 3D neuroimaging data, and (3) evidence-based guidelines for selecting and applying XAI methods in clinical practice. This work addresses a fundamental challenge in medical AI, potentially accelerating the clinical translation of deep learning advances while ensuring their interpretability and reliability.
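To illustrate the ground-truth evaluation idea described above, the following minimal Python sketch scores a 3D attribution map against a known anatomical mask (for example, a multiple sclerosis lesion map or the region underlying an IDP). It is not the project's actual pipeline; the function names, metrics, and array shapes are assumptions introduced here purely for illustration, using only NumPy.

# Minimal sketch (illustrative assumptions only, not the project's pipeline):
# score how well a 3D XAI attribution map localizes a known anatomical target.
import numpy as np

def relevance_mass_inside(attribution: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of total positive attribution mass falling inside the mask."""
    attr = np.clip(attribution, 0, None)   # keep positive evidence only
    total = attr.sum()
    if total == 0:
        return 0.0
    return float(attr[mask.astype(bool)].sum() / total)

def top_k_hit_rate(attribution: np.ndarray, mask: np.ndarray, k: int = 1000) -> float:
    """Fraction of the k most strongly attributed voxels lying inside the mask."""
    flat = attribution.ravel()
    top_idx = np.argpartition(-flat, k)[:k]   # indices of the k largest values
    return float(mask.ravel()[top_idx].astype(float).mean())

# Toy usage: a random "attribution map" scored against a small cubic target region.
rng = np.random.default_rng(0)
attr = rng.random((64, 64, 64))
mask = np.zeros(attr.shape, dtype=bool)
mask[20:30, 20:30, 20:30] = True
print(relevance_mass_inside(attr, mask), top_k_hit_rate(attr, mask))

In the validation setting sketched here, an attribution map that genuinely captures the target anatomy should score well above the chance level given by the mask's volume fraction, whereas explanations driven by global brain effects or spurious correlations should not.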
DFG Programme
Research Grants
