Thesis etd-08212025-222319
Thesis type
Master's degree thesis
Author
MARRUFO ARANDA, ANDREA CAROLINA
URN
etd-08212025-222319
Title
Automated Authority: Re-evaluating the Legal and Ethical Legitimacy of Human Oversight in AI-Assisted Decisions in Employment
Department
GIURISPRUDENZA
Degree programme
DIRITTO DELL'INNOVAZIONE PER L'IMPRESA E LE ISTITUZIONI
Supervisors
Supervisor: Prof. Passaglia, Paolo
Keywords
- accountability
- AI Act
- algorithmic decision making
- anchoring bias
- artificial intelligence
- human oversight
- non-discrimination
Defence session start date
15/09/2025
Availability
Not available for consultation
Release date
15/09/2065
Abstract
In recent years, Artificial Intelligence (AI) has been rapidly integrated into high-stakes employment decisions, generating significant debate on how to mitigate potential bias in the AI systems that support them. While the AI Act addresses the use of AI in such decisions, this dissertation focuses on the “Human-in-the-Loop” requirement in European law. It examines how this mechanism, intended to safeguard the EU-law principles of fairness and non-discrimination, may in practice produce an “anchoring bias”: a well-documented phenomenon whereby a manager’s judgment is disproportionately influenced by an AI-generated recommendation. This risk is compounded when the AI system itself incorporates biased recommendation algorithms, meaning that the human overseer’s decision is shaped not only by the AI’s suggestion but also by any discriminatory patterns embedded in the system’s outputs.
The central argument is that the current European regulatory framework may not be adequately designed, because it overlooks the psychological reality that humans frequently fail to calibrate their reliance on AI during decision-making. Adopting a doctrinal legal analysis and a policy-oriented critique, the research highlights the need to re-evaluate the legal and ethical legitimacy of relying on human oversight for high-risk employment systems. It argues that this approach creates a significant regulatory gap, allowing organizations to justify the poor treatment of individuals on the basis of an AI’s average-case performance, while failing to protect those who are, through no fault of their own, exceptions to a data-driven rule.
To address this regulatory gap, the dissertation proposes shifting from a sole reliance on human oversight to the introduction of specific obligations for companies, organizations, and AI providers to implement debiasing strategies, such as the “consider the opposite” approach, which academic research has shown can reduce the influence of AI-generated anchors. In conclusion, it explores how the AI Act could be amended to redefine “accountability” and ensure that high-stakes decisions optimized by AI systems also safeguard the individual’s right to non-discrimination and the right not to be judged solely by a machine.
File
The thesis is not available for consultation.