Thesis etd-01272026-184626
Thesis type
Doctoral (PhD) thesis
Author
RUFFINI, FABRIZIO
URN
etd-01272026-184626
Title
Federated Learning of Explainable Artificial Intelligence models: real world applications and theoretical advances
Scientific-disciplinary sector
ING-INF/05 - Information Processing Systems
Degree programme
National PhD Programme in Artificial Intelligence
Supervisors
Tutor: Prof. Francesco Marcelloni
Supervisor: Prof. Pietro Ducange
Keywords
- AI
- Explainability
- Federated Learning
- Trustworthy
Defence date
18/02/2026
Availability
Full
Abstract
This thesis contributes to the research area of Trustworthy Artificial Intelligence (AI), focusing on aspects that are still largely unexplored.
The objective is to develop systems that achieve high predictive accuracy while fostering stakeholders' trust in the AI system. Since trustworthiness is a multidimensional topic, this work focuses on two of its key requirements: transparency and data privacy.
In this context, the thesis explores the use of Fed-XAI models (explainable AI models trained through Federated Learning) to simultaneously ensure data privacy and provide explanations. At the time of writing, most existing works address these issues separately.
Furthermore, this work compares the explanations generated by post-hoc methods applied to opaque models with those derived from interpretable-by-design models. Such a systematic comparison remains largely unexplored in the current literature, which predominantly focuses on post-hoc methods alone.
The thesis is structured as a three-act play. The first two acts examine the application of different Fed-XAI models to real-world datasets, initially focusing on interpretable-by-design approaches and subsequently comparing them with opaque models. These analyses highlight the limitations of existing strategies and motivate the methodological framework introduced in the third act, which represents the main theoretical contribution of this thesis.
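To make the Fed-XAI idea concrete, the sketch below shows a minimal FedAvg-style round over interpretable models: each client fits a linear model on its private data (here, ordinary least squares), and the server aggregates only the coefficients, weighted by client data size. This is an illustrative toy, not the thesis's actual framework; the function names and the synthetic data are assumptions made for the example.

```python
import numpy as np

def local_fit(X, y):
    # Each client fits an interpretable linear model on its private data;
    # only the coefficients leave the client, never the raw records.
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def fed_avg(client_coefs, client_sizes):
    # The server aggregates client models with a FedAvg-style weighted
    # average; the result is itself an interpretable linear model.
    weights = np.asarray(client_sizes) / sum(client_sizes)
    return np.average(client_coefs, axis=0, weights=weights)

# Two clients holding private samples of the same relation y = 2x + 1
rng = np.random.default_rng(0)
clients = []
for n in (40, 60):
    X = rng.uniform(-1, 1, size=(n, 1))
    y = 2 * X[:, 0] + 1 + rng.normal(0, 0.01, size=n)
    clients.append((X, y))

coefs = [local_fit(X, y) for X, y in clients]
global_model = fed_avg(coefs, [len(X) for X, _ in clients])
print(global_model)  # slope and intercept remain directly readable
```

The aggregated coefficients stay human-readable (a slope near 2 and an intercept near 1), which is the property that distinguishes interpretable-by-design Fed-XAI models from opaque ones requiring post-hoc explanation.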
File
| File name | Size |
|---|---|
| PhDAIThe..._pdfA.pdf | 9.34 MB |