
ETD

Digital archive of theses defended at the University of Pisa

Thesis etd-12152025-235842


Thesis type
PhD thesis
Author
GAUR, MITISHA
URN
etd-12152025-235842
Title
The Post Facto Investigation Of Automated Governance Projects: Revealing The Value Of A Sociotechnical Approach
Academic discipline
IUS/02 - Comparative Private Law
Degree programme
NATIONAL PHD PROGRAMME IN ARTIFICIAL INTELLIGENCE
Supervisors
tutor Comandè, Giovanni
co-supervisor Rinzivillo, Salvatore
Keywords
  • AI Act
  • AI Governance
  • AI Regulation
  • Algorithmic Accountability
  • Artificial Intelligence
  • Automated Decision Making
  • Data Governance
  • Fundamental Rights
  • Human Oversight
  • Meaningful Transparency
  • Public Administration
  • Sociotechnical Systems
  • Techno-Pessimism
  • Techno-Optimism
  • Techno-Pragmatism
Defense date
9 January 2026
Availability
Not available (embargoed)
Release date
9 January 2029
Abstract
Trustworthy adoption of AI systems within the decision-making pipelines of public authorities (Automated Governance) is a tall order: public authorities must navigate not only the nuances of adopting AI, a disruptive and transformative technology, but also comply with administrative-law principles aimed at preserving the rule of law and upholding fundamental rights. This thesis investigates the complex challenges facing public authorities across Europe as they increasingly adopt Automated Governance systems, revealing that while these systems may promise enhanced efficiency and reduced operational costs, significant impediments plague their sustainable and scalable implementation. These impediments include a lack of meaningful transparency, a lack of meaningful human control, an inability to ensure adequate human oversight, and a vacuum of participatory, citizen-centric design focused on ensuring meaningful interaction between Automated Governance systems and decision subjects. The thesis further identifies that public authorities, encouraged to chase ‘first-adopter’ status in embracing Automated Governance systems, have developed a tunnel vision focused primarily on technical elements such as AI algorithms, datasets, and technical infrastructure, while neglecting crucial non-technical factors such as workforce competencies, AI literacy, process pipelines, and organisational structures. This has led to the deterioration of fundamental rights, the erosion of administrative discretion, and a loss of public trust.

Moving beyond the dominant approaches of techno-optimism (viewing Automated Governance as a panacea for administrative inefficiency) and techno-pessimism (advocating against the use of Automated Governance systems by critically analysing the adverse impacts associated with their adoption), this study advocates a techno-pragmatic approach grounded in the theory of sociotechnical systems (STS), which recognises three interconnected elements within institutions: (1) technological elements (AI algorithms, data, and infrastructure), (2) organisational elements (organisational structures, AI governance policies, and risk-mitigation mechanisms), and (3) social elements (AI literacy levels, human autonomy, and behavioural factors).

The study undertakes a mixed-methods approach combining comprehensive desk research with use-case analysis of five prominent instances of AI failure across public authorities: (i) the Dutch Taxation Authority's Systeem Risico Indicatie (SyRI), (ii) Trelleborg Municipality's welfare assessment system, (iii) the UK Post Office's Horizon software, (iv) the UK Department for Work and Pensions' Fraud Risk Model, and (v) the UK Home Office's visa screening system. The cases are assessed against seven key requirements derived from a combined reading of the EU HLEG Ethics Guidelines and the EU AI Act. On this basis, the study identifies four core issues plaguing the adoption of Automated Governance systems: (1) broad motivations with inadequate planning, (2) inadequate internal AI governance mechanisms, (3) insufficient meaningful transparency, and (4) systematic exclusion of decision subjects from the development and deployment of Automated Governance.

The study then examines the EU AI Act as the primary regulatory framework, finding that while it addresses many of these concerns and embodies sociotechnical parity across its provisions, it focuses primarily on high-risk AI systems, leaving gaps in regulatory coverage for non-high-risk systems that public authorities may adopt and that may nonetheless pose risks to health, safety, and fundamental rights despite their low risk classification.

To address these challenges, the thesis proposes a comprehensive sociotechnical framework built on three core building blocks: (i) meaningful transparency, (ii) algorithmic accountability, and (iii) human oversight. It recommends institutional best practices, including treating AI adoption as a comprehensive sociotechnical endeavour, establishing participatory, citizen-centric AI systems, and bridging accountability gaps through enforced meaningful transparency requirements. The research contributes to AI governance knowledge by providing empirical evidence of AI failures, demonstrating the practical application of sociotechnical systems theory, and offering actionable recommendations for public authorities seeking safe and trustworthy AI adoption, while identifying regulatory gaps and proposing institutional safeguards that go beyond the current provisions of the EU AI Act. Ultimately, it argues that successful AI adoption across public authorities requires a pragmatic, sociotechnically cognisant approach that creates a synergistic human-AI partnership while actively preserving democratic principles and fundamental rights.