Thesis etd-01272026-184626
Thesis type
Doctoral thesis (PhD)
Author
RUFFINI, FABRIZIO
URN
etd-01272026-184626
Thesis title
Federated Learning of Explainable Artificial Intelligence models: real world applications and theoretical advances
Academic discipline
ING-INF/05 - Information Processing Systems (SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI)
Course of study
National PhD in Artificial Intelligence (DOTTORATO NAZIONALE IN INTELLIGENZA ARTIFICIALE)
Supervisors
Tutor: Prof. Marcelloni, Francesco
Supervisor: Prof. Ducange, Pietro
Keywords
- AI
- Explainability
- Federated Learning
- Trustworthy
Graduation session start date
18/02/2026
Availability
Full
Summary
This thesis aims to provide a personal contribution to the research area of Trustworthy Artificial Intelligence (AI), focusing on aspects that remain largely unexplored.
The objective is to develop systems that achieve high predictive accuracy while creating trust between stakeholders and the AI system. Since trustworthiness is a multidimensional topic, this work focuses on two of its key requirements: transparency and data privacy.
In this context, the thesis explores the use of Fed-XAI models (explainable AI models trained through Federated Learning) to simultaneously ensure data privacy and provide explanations. At the time of writing, most existing works address these issues separately.
Furthermore, this work compares the explanations generated by post-hoc methods applied to opaque models with those derived from interpretable-by-design models. Such a systematic comparison remains largely absent from the current literature, which focuses predominantly on post-hoc methods alone.
The thesis is structured as a three-act play. The first two acts examine the application of different Fed-XAI models to real-world datasets, initially focusing on interpretable-by-design approaches and subsequently comparing them with opaque models. These analyses highlight the limitations of existing strategies and motivate the methodological framework introduced in the third act, which represents the main theoretical contribution of this thesis.
File
| File name | Size |
|---|---|
| PhDAIThe..._pdfA.pdf | 9.34 MB |