Project Details

Benchmarking debiasing methods for artificial intelligence in neuroimaging research

Applicant Dr. Didem Stark
Subject Area Clinical Neurology; Neurosurgery and Neuroradiology
Term since 2025
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 565037378
 
This project addresses the critical issue of bias and fairness in artificial intelligence (AI) applications in neuroimaging, with the goal of promoting fairness and equity in healthcare. AI has transformed neuroimaging by enabling precise analysis of brain scans for diagnosing neurological and mental disorders. However, these systems often exhibit biases related to demographic factors such as sex, age, ethnicity, and socioeconomic status, which can undermine diagnostic accuracy and deepen health disparities. This research systematically benchmarks debiasing methods and evaluates them with fairness metrics, providing a basis for developing fairer AI models for clinical use.

The study takes a two-tier approach, using both synthetic and real-world datasets. Synthetic brain images, generated with Latent Diffusion Models (LDMs), allow controlled testing of fairness metrics and debiasing techniques. Real-world datasets such as ADNI and the UK Biobank will be used for tasks including brain-age prediction, Alzheimer's disease detection, and alcohol use disorder classification, employing a range of machine learning models from linear regression and support vector machines (SVMs) to convolutional neural networks (CNNs).

A central focus is the evaluation of debiasing methods that mitigate bias in AI, using fairness metrics such as equality of opportunity, predictive equality, and equalized odds. These debiasing methods include pre-processing techniques such as data reweighing, in-processing methods such as adversarial debiasing and regularization, and post-processing strategies such as equalized-odds optimization. Subgroup-specific models and the incorporation of protected attributes into AI systems will also be explored to assess their potential for achieving fairness.
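To make the criteria above concrete, the sketch below shows how an equalized-odds gap and the reweighing pre-processing weights of Kamiran and Calders can be computed with NumPy. This is an illustrative, minimal implementation for binary labels and is not the project's actual benchmark code or toolbox; the function names are our own.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group difference in true-positive rate (TPR)
    and false-positive rate (FPR). Equalized odds requires both
    rates to be equal across protected groups, so a gap of zero
    means the criterion is satisfied."""
    gaps = {}
    for label, name in ((1, "tpr_gap"), (0, "fpr_gap")):
        rates = []
        for g in np.unique(group):
            mask = (group == g) & (y_true == label)
            rates.append(y_pred[mask].mean())  # positive rate within subgroup
        gaps[name] = max(rates) - min(rates)
    return gaps

def reweighing_weights(y, group):
    """Pre-processing sample weights w(g, y) = P(g) * P(y) / P(g, y),
    which make group membership and label statistically independent
    in the reweighted training sample."""
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            cell = (group == g) & (y == label)
            w[cell] = (group == g).mean() * (y == label).mean() / cell.mean()
    return w
```

The weights can be passed to any estimator that accepts per-sample weights (e.g. `sample_weight` in scikit-learn), while the gap function can be applied to held-out predictions of any of the models mentioned above.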
Moreover, an open-source software toolbox will be developed to help researchers and stakeholders evaluate fairness metrics and apply debiasing techniques across datasets and predictive tasks, particularly in clinical neuroimaging. By addressing algorithmic bias, this project seeks to enhance the inclusivity and reliability of AI models in clinical neuroimaging. The outcomes will contribute to building ethical AI systems, ensuring better healthcare outcomes for diverse populations and reducing disparities in diagnosis and treatment. I believe this work is a significant step toward responsibly integrating AI into healthcare, especially in the clinical neurosciences, while making its benefits accessible to all.
DFG Programme WBP Position
 
 
