
ETD

Digital archive of theses defended at the University of Pisa

Thesis etd-12312025-130835


Thesis type
PhD thesis
Author
POE, ROBERT LEE
URN
etd-12312025-130835
Title
The Complexity of Value Alignment: Discrimination in Automated Distributive Decisions
Academic discipline
IUS/02 - Comparative Private Law
Course of study
NATIONAL PHD IN ARTIFICIAL INTELLIGENCE
Supervisors
Tutor: Comandè, Giovanni
Supervisor: Ruggieri, Salvatore
Keywords
  • algorithmic bias
  • distributive justice
  • fair machine learning
  • non-discrimination
Defence date
09/01/2026
Availability
Not available
Release date
09/01/2029
Abstract (English)
The intersection of artificial intelligence (AI) and governance increasingly shapes how distributive decisions (those allocating benefits or burdens among individuals and groups) are made. As AI systems are progressively integrated into critical societal functions such as hiring, credit allocation, and resource distribution, the concept of value alignment becomes a focal point of governance. Specifically, value alignment involves ensuring that automated decisions are consistent with legal and ethical frameworks. However, the pace of technological advancement in AI systems significantly outstrips the evolution of regulatory and ethical governance mechanisms, resulting in considerable complexity and potential conflicts, particularly in the domain of discrimination.

This work addresses the nuanced complexities surrounding the value alignment of automated distributive decisions, focusing explicitly on discrimination as understood and regulated by the European Union's legal framework. The research investigates whether and how automated decision-making systems can comply with established principles of non-discrimination law, highlighting significant challenges presented by the current practices within the AI community, especially in the field known as fair machine learning.

At its core, this research critiques prevailing methodologies in AI fairness, which frequently aim to achieve statistical parity or group outcome similarity by manipulating datasets to eliminate disparities correlated with protected characteristics such as race, gender, or ethnicity. Such approaches, although motivated by ethical intentions, risk violating the very legal principles they intend to uphold. This work argues that adherence to these methodologies may unintentionally result in discrimination by undermining the legally established principle of equal treatment. In other words, what AI practitioners often label as "debiasing" can, paradoxically, perpetuate discrimination under European Union law.

A central claim of this research is the necessity of a robust understanding and alignment with fundamental rights jurisprudence, particularly as articulated by the Court of Justice of the European Union (CJEU). Non-discrimination in the European legal context, as encapsulated in Article 21 of the Charter of Fundamental Rights, hinges on the principle of proportionality, which seeks a delicate balance between formal equality—treating similar situations alike—and substantive equality, which sometimes permits differential treatment to redress historical or structural disadvantages. The European Union’s nuanced approach differs critically from simplistic statistical parity measures predominant in AI ethics practices.

The complexity of aligning AI systems with these principles arises not merely from technical difficulties, but also from fundamental misunderstandings within the AI community about the nature of discrimination itself. AI practitioners often conceptualize bias narrowly, focusing exclusively on disparities in outcomes without sufficient consideration of the underlying legal doctrines that define lawful and unlawful discrimination. This work addresses this conceptual confusion by distinguishing between two critical interpretations of bias: (1) deviation from the accurate estimation of a parameter, and (2) deviation from equal or similar outcomes among groups based on a protected feature. The European Union's legal standards prioritize accuracy in individual assessment rather than simply mandating equal outcomes, highlighting a significant misalignment with prevalent fair machine learning practices.
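The two readings of bias distinguished above can be made concrete with a minimal numerical sketch. The scores and group labels below are invented for illustration and are not drawn from the thesis:

```python
# Two readings of "bias" in automated scoring, on small hypothetical data
# (scores and group labels invented for illustration, not from the thesis).
from statistics import mean

true_score = [0.9, 0.8, 0.4, 0.7, 0.3, 0.6]     # ground-truth suitability
predicted  = [0.85, 0.8, 0.45, 0.5, 0.35, 0.4]  # model estimates
group      = ["A", "A", "A", "B", "B", "B"]     # protected feature

# Reading (1): bias as deviation from the accurate estimation of a
# parameter -- here, the mean signed error of the individual assessments.
estimation_bias = mean(p - t for p, t in zip(predicted, true_score))

# Reading (2): bias as deviation from equal or similar outcomes among
# groups -- here, the gap between the groups' mean predicted scores.
disparity = (mean(p for p, g in zip(predicted, group) if g == "A")
             - mean(p for p, g in zip(predicted, group) if g == "B"))

print(f"estimation bias (reading 1): {estimation_bias:+.3f}")
print(f"group disparity (reading 2): {disparity:+.3f}")
```

A model can score low on one reading and high on the other: forcing the group disparity (reading 2) to zero, for instance by shifting one group's scores, necessarily distorts individual estimates and so worsens reading 1. This is one way of picturing the tension the abstract describes between outcome parity and accuracy of individual assessment.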

In exploring these complex intersections, this research advances a theoretical framework termed "Distributive Decision Theory", providing a structured approach to analyzing automated distributive decisions. By clearly defining key components such as the "distributor" (the AI system), the "distribuendum" (benefit or burden), and the "pattern of distribution", this framework allows for systematic examination of both ethical intentions and legal compliance. The practical relevance of this framework is demonstrated through critical analysis of automated hiring systems, a prominent and increasingly common application of AI technologies. The research identifies specific scenarios wherein attempts to ensure fairness through common algorithmic adjustments can inadvertently lead to unlawful discrimination, violating the fundamental principle of equal treatment as interpreted by the CJEU.
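The three named components of the framework can be sketched as a simple record type. The field names and the hiring example below are hypothetical illustrations of the abstract's terminology, not the thesis's own formalization:

```python
from dataclasses import dataclass

# Illustrative encoding of the components named in the abstract's
# "Distributive Decision Theory" (field names are assumptions, not
# drawn from the thesis itself).
@dataclass
class DistributiveDecision:
    distributor: str    # the deciding AI system
    distribuendum: str  # the benefit or burden being allocated
    pattern: str        # the pattern of distribution applied

# A hypothetical automated-hiring instance of the framework.
hiring = DistributiveDecision(
    distributor="CV-screening model",
    distribuendum="job interview slot",
    pattern="ranking by predicted job performance",
)
print(hiring)
```

Separating the three components makes the object of legal analysis explicit: the same distributor and distribuendum can be lawful or unlawful depending on the pattern of distribution chosen.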

Additionally, the work engages with the European Union's regulatory response to these challenges, particularly the Artificial Intelligence Act. This legislation seeks to establish harmonized rules on AI, categorizing systems according to risk levels and outlining mandatory requirements to safeguard fundamental rights. The work provides a detailed descriptive analysis of the Act, carefully examining its regulatory development, its definitions of AI systems, and its structured approach to risk classification, including prohibited practices such as social scoring systems. It further explains the legal boundaries established for processing sensitive group-membership data within AI applications. Through this descriptive overview, the work clarifies how the AI Act aims to systematically integrate considerations of trustworthiness and fundamental rights protection into the governance of artificial intelligence across Europe.

Ultimately, this work attempts to make a substantial contribution by illuminating the disconnect between current practices in fair machine learning and established legal principles, providing both a theoretical framework and practical guidance for more effective and lawful integration of AI in distributive decision-making processes. By grounding discussions of discrimination firmly within the European Union's jurisprudence, this work aims for conceptual clarity and offers pathways toward the development of genuinely trustworthy AI systems—ones that respect and uphold the fundamental rights upon which European law is founded.