Thesis etd-11102025-101820
Thesis type
PhD research thesis
Author
CAPPUCCIO, ELEONORA
URN
etd-11102025-101820
Title
Design and Development of Explanation User Interfaces: Interactive Visual Dashboards and Design Guidelines for Explainable AI
Scientific disciplinary sector
INF/01 - INFORMATICA
Degree programme
DOTTORATO NAZIONALE IN INTELLIGENZA ARTIFICIALE
Supervisors
Tutor: Prof.ssa Lanzilotti, Rosa
Supervisor: Dott. Rinzivillo, Salvatore
Keywords
- Explainable Artificial Intelligence
- Explanation User Interface
- Human Centered Artificial Intelligence
- Human Centered Design
- Human Computer Interaction
- Machine Learning
- Visual Analytics
Defence date
05/12/2025
Availability
Not available for consultation
Release date
05/12/2028
Abstract
As artificial intelligence and machine learning systems become increasingly integrated into critical areas such as healthcare, finance, and public services, the need for transparency in their decision-making processes has become more pressing. While much of the research in Explainable AI has focused on improving algorithmic transparency, comparatively little attention has been given to how these explanations are presented to end-users. This thesis addresses this gap by focusing on the design and development of Explanation User Interfaces (XUIs), the critical link between algorithmic insights and human understanding.

Building on the three-part framework for eXplainable Artificial Intelligence (XAI) proposed by the Defense Advanced Research Projects Agency (DARPA), which distinguishes between the explainable model, the user interface, and the psychological needs of the user, this research positions Explanation User Interfaces as essential to making AI systems not just transparent, but meaningfully interpretable to users. Central to this work is the recognition that explanation is a design problem as much as it is a technical one; therefore, explanations must be communicated in ways that align with human reasoning, support exploration, and foster understanding.

To investigate this challenge, the thesis is structured around four main research contributions. First, a systematic literature review maps the current landscape of Explanation User Interface research, highlighting key design considerations and recurring challenges. This review leads to a set of practical guidelines for designing interfaces that support effective explanation. Second, the thesis introduces trace, an interactive visual interface for rule-based explanations, designed with a strong focus on interactivity and usability. Third, the work explores novel interaction paradigms through tools that allow users to engage with counterfactual explanations and latent space visualisations, encouraging hands-on exploration of AI decision boundaries. Finally, the thesis proposes methods for integrating domain knowledge directly into explanation algorithms, making their output more closely aligned with human understanding and expert reasoning.

By combining insights from human-computer interaction, visual analytics, and explainable machine learning, this thesis contributes a heterogeneous approach to explainability, one that not only improves transparency but also enhances the usability and trustworthiness of AI systems. The findings provide both conceptual and practical tools for advancing the design of explanation interfaces and are intended to support future research and the real-world deployment of human-centred Explainable AI systems.
File
The thesis is not available for consultation.