Project Details
Ensuring Fairness in Federated Learning
Applicant
Marco Fisichella, Ph.D.
Subject Area
Methods in Artificial Intelligence and Machine Learning
Computer Architecture, Embedded and Massively Parallel Systems
Term
since 2025
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 560213035
This project, titled "Ensuring Fairness in Federated Learning", addresses the critical challenge of fairness in Federated Learning (FL) systems, which enable machine learning models to be trained without sharing raw data across users or organizations. The local, often not independent and identically distributed (non-IID) datasets at each client make it difficult to ensure fairness on both the client side and the server side. In contrast to traditional measures, which often rely on statistical techniques, we aim to identify the relationships between sensitive attributes (such as gender or ethnicity) and the model's predictions by integrating causal approaches, facilitating the development of fairer FL models.

Another key focus is enhancing privacy in FL systems. While FL inherently protects user data by keeping it local and sharing only model updates, privacy risks remain, especially from attacks that could compromise sensitive information. The project will investigate how causal methods can strengthen privacy protections beyond conventional techniques such as differential privacy, which adds noise to data.

Additionally, the project addresses the challenge of handling multiple sensitive attributes in FL models. Current research often concentrates on a single sensitive attribute, neglecting the complexity of real-world scenarios in which individuals have multiple characteristics that may influence fairness. The project aims to develop new metrics and bias-mitigation strategies that consider multiple sensitive attributes, enhancing the applicability and generalizability of fairness solutions.
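As a purely illustrative sketch, not part of the project's proposed methods, the kind of statistical fairness measure that the causal approach is contrasted with can be computed as a demographic-parity gap: the spread in positive-prediction rates across sensitive groups. Grouping by combinations of attributes also shows why multiple sensitive attributes complicate fairness assessment. All attribute names and data below are hypothetical.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups.

    predictions: list of 0/1 model outputs
    groups: list of sensitive-group labels, one per prediction
    """
    counts = {}
    for pred, g in zip(predictions, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + pred)
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)

# Single (hypothetical) sensitive attribute with two groups, A and B:
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
gender = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, gender))  # 0.75 - 0.25 = 0.5

# Multiple sensitive attributes: each combination becomes its own group,
# so the gap can widen even when each attribute looks balanced on its own.
ethnicity = ["X", "X", "Y", "Y", "X", "X", "Y", "Y"]
joint = [f"{g}/{e}" for g, e in zip(gender, ethnicity)]
print(demographic_parity_gap(preds, joint))  # 1.0 - 0.0 = 1.0
```

In a federated setting, such group rates would have to be aggregated from clients without sharing raw data, which is one reason purely statistical measures are hard to apply on the server side.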
DFG Programme
Research Grants
