
ETD

Digital archive of theses defended at the Università di Pisa

Thesis etd-01282026-194622


Thesis type
PhD thesis
Author
FONTANA, MICHELE
URN
etd-01282026-194622
Title
Optimizing Fairness in Federated Learning: Balancing Fairness and Performance under Budget Constraints
Scientific-disciplinary sector
INF/01 - Computer Science
Degree programme
NATIONAL PHD IN ARTIFICIAL INTELLIGENCE
Supervisors
tutor Prof. Monreale, Anna
supervisor Dr. Naretto, Francesca
supervisor Dr. Nanni, Mirco
Keywords
  • ethical ai
  • fairness
  • federated learning
Thesis defense date
18/02/2026
Availability
Not available for consultation
Release date
18/02/2029
Abstract
Machine Learning (ML) is now widely used in domains such as finance, healthcare, and criminal
justice, where algorithmic decisions directly affect opportunities, rights, and access to resources.
These systems can achieve remarkable accuracy, but models trained on biased data may also
reproduce and amplify existing inequalities. Ensuring that ML is not only accurate but also fair
has therefore become a pressing concern. At the same time, legal and ethical restrictions often
prevent the centralization of sensitive data, which has led to the rise of Federated Learning (FL),
a paradigm that enables collaborative training without sharing raw data. Although FL limits
the need to centralize data by keeping records local, it raises distinctive challenges for fairness:
clients typically hold non-IID data, meaning that their datasets differ in size, composition, or
distribution, which can exacerbate disparities. Moreover, fairness must be considered at multiple
levels, and classical definitions and mitigation strategies often fail to capture the complexity of
real-world applications. In this thesis we treat fairness as a first-class objective in FL, on par with
performance. We address four main challenges: (i) the tension between global fairness, measured
across the federation as a whole, and local fairness, measured within individual clients; (ii) the
extension of fairness beyond binary classification and single attributes; (iii) the simultaneous
enforcement of multiple fairness constraints; and (iv) the lack of explicit control over the trade-off
between fairness and predictive performance.
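The tension in challenge (i) can be made concrete with a small sketch. The code below, a hypothetical example not taken from the thesis, computes a standard group-fairness metric (statistical parity difference) both within each client (local fairness) and over the pooled federation (global fairness); the toy data and client split are illustrative assumptions, chosen so the two views disagree, as they can under non-IID data.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """SPD = P(y_hat = 1 | group = 1) - P(y_hat = 1 | group = 0)."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

# Hypothetical two-client federation with non-IID data: each client
# holds a different pattern of predictions for the protected group.
clients = [
    # (binary predictions, protected-attribute values) per client
    (np.array([1, 1, 0, 1, 0, 1]), np.array([1, 1, 1, 0, 0, 0])),
    (np.array([0, 0, 1, 1, 1, 1]), np.array([1, 1, 1, 0, 0, 0])),
]

# Local fairness: the metric evaluated on each client's own data.
local_spd = [statistical_parity_difference(y, g) for y, g in clients]

# Global fairness: the same metric over the pooled federation.
# (A real FL system would estimate this without centralizing raw data.)
y_all = np.concatenate([y for y, _ in clients])
g_all = np.concatenate([g for _, g in clients])
global_spd = statistical_parity_difference(y_all, g_all)

print(local_spd)   # client 1 looks perfectly fair, client 2 does not
print(global_spd)  # the federation-wide view sits between the two
```

Here client 1 is locally fair (SPD = 0) while client 2 is strongly unfair (SPD = -2/3), and the global SPD of -1/3 matches neither: enforcing fairness at one level does not automatically deliver it at the other.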
To this end, we develop a progression of methods that gradually expands the scope of fairness-aware
FL. GLOFAIR introduces an approach that balances fairness and performance when enforcing
a finite set of group fairness constraints in FL, but the trade-off is determined implicitly and
cannot be fixed in advance. Building on this limitation, FairLAB is developed in a centralized
setting. It extends the methodology to multiple and intersectional constraints and introduces
the idea of a performance budget, which allows practitioners to decide beforehand how much
accuracy they are willing to trade for fairness. In this way, FairLAB transforms the implicit
compromise of GLOFAIR into a tunable design choice. We then present FeDist, a strategy for
transferring knowledge across models in FL. Although focused on performance rather than fairness,
it provides the technical building block that is later extended to fairness-aware training.
Finally, FedFairLAB combines the principles of FairLAB and FeDist to enforce multiple, intersectional,
and multiclass fairness constraints under realistic federated conditions. By doing
so, it provides fairness guarantees while maintaining strong predictive performance and explicit
control over performance budgets.
This work advances trustworthy FL by embedding fairness into the learning
process and complementing the inherent data-locality benefits of the FL paradigm to enable
responsible deployment in high-stakes domains.