
ETD

Digital archive of theses defended at the Università di Pisa

Thesis etd-08262025-173334


Thesis type
Master's degree thesis
Author
LEYVA FERZULI, OSCAR JOSAFAT
URN
etd-08262025-173334
Title
Military AI in the EU: Identifying Key Gaps and Risks, and Exploring Pathways to Effective Governance
Department
GIURISPRUDENZA
Degree programme
DIRITTO DELL'INNOVAZIONE PER L'IMPRESA E LE ISTITUZIONI
Supervisors
Supervisor: Prof. Marinai, Simone
Co-supervisor: Dott. Celeste, Edoardo
Keywords
  • artificial intelligence
  • digital sovereignty
  • European Union
  • governance
  • military AI
  • strategic autonomy
Defence session start date
15/09/2025
Availability
Full
Abstract
While the EU has made significant strides in regulating AI within the civilian sector, particularly through the Digital Strategy and the recently adopted AI Act, its approach to military AI remains fragmented, underdeveloped, and politically sensitive. This gap not only undermines the EU’s credibility as a global standard-setter in AI but also creates legal uncertainty, operational risks, and potential inconsistencies with the Union’s stated values and strategic ambitions. Calls for military AI regulation are increasing, demanding a clear legal framework to ensure that all AI use in defence complies with international humanitarian law and EU values. Some even call for a ban on lethal autonomous weapon systems (LAWS) that lack meaningful human control.
This thesis traces the parallel evolution of the EU’s Digital and Defence Strategies, highlighting their growing convergence over the last decade. Initially separate domains, these strategies now share a common dependence on key enabling technologies, such as AI. This increasing overlap becomes particularly evident in policy documents that stress digital sovereignty, technological independence, reduced strategic dependencies, and the ethical use of AI. These concepts are no longer confined to civilian innovation or economic competitiveness but now permeate the EU’s vision for collective defence, resilience, and strategic autonomy.
However, the thesis shows that this convergence has not been matched in governance. While the digital domain benefits from an increasingly mature legal architecture, including binding obligations on risk classification, transparency, and human oversight, the defence side operates largely through intergovernmental cooperation and voluntary commitments. Military AI capabilities remain beyond the reach of common regulatory standards. This creates a dual governance structure within the EU that risks undermining the legal coherence of its internal market and the credibility of its ethical posture in international forums.
To map the existing governance landscape, this research undertakes a chronological review of EU policy documents and programmes. The thesis then identifies the critical risks and governance gaps: regulatory fragmentation among Member States; a lack of harmonised definitions and standards; insufficient provisions for transparency, accountability, and traceability; and the opacity of decision-making in AI systems. These shortcomings are compounded in military contexts by national security exemptions under Article 4(2) TEU, secrecy requirements, and complex chains of responsibility. Without legal clarity, the EU risks enabling the deployment of military AI systems without meaningful accountability, an outcome at odds with the normative ambitions of both its digital and defence strategies.
The thesis also evaluates existing instruments and institutional competences that could serve as vehicles for future military AI governance, including the European Defence Fund, Permanent Structured Cooperation (PESCO) and the Common Foreign and Security Policy, among others. It explores options for initiating a legal pathway while respecting Member States’ sovereignty in defence matters; a hybrid model based on existing structures could serve as a preliminary step.
Ultimately, the research argues that the EU cannot hope to lead globally on AI governance, or ensure internal legal consistency, without addressing its military AI dimension. The same foundational technologies underpin both defence capability and digital competitiveness; both strategies rely on responsible innovation, trustworthy AI, and value-driven development. Excluding the defence sector from the EU’s legal and ethical frameworks would not only weaken strategic autonomy but erode the EU’s credibility as a principled global actor.
While this thesis does not attempt to produce a legislative act, it identifies the critical areas where legal oversight is necessary and offers a conceptual foundation for future regulation. It aims to contribute to the research and policy discourse by clarifying the strategic, legal, and ethical implications of military AI in the EU context. A more in-depth investigation may be required to examine specific regulatory components in detail, which could form the basis of future research.