
ETD

Digital archive of theses discussed at the University of Pisa

Thesis etd-10292025-162750


Thesis type
Master's degree thesis
Author
CARELLA, ALESSANDRO
URN
etd-10292025-162750
Title
Interactive visual interfaces for synthetic neighbourhoods and rule based explanations
Department
INFORMATICA
Degree programme
DATA SCIENCE AND BUSINESS INFORMATICS
Supervisors
Supervisor Prof. Rinzivillo, Salvatore
Keywords
  • bidirectional coordination
  • details on demand
  • explainable artificial intelligence
  • human-AI interaction
  • surrogate model
  • synthetic neighborhoods
  • visual analytics
Defence session start date
04/12/2025
Availability
Not available for consultation
Release date
04/12/2028
Abstract
As artificial intelligence systems make increasingly consequential decisions in critical domains such as healthcare and finance, the people affected by these decisions need transparent explanations they can understand and trust. The evolving field of eXplainable AI (XAI) addresses this need, aiming to make such explanations cognitively accessible and verifiable by human users. The objective of this thesis is the development of an interactive visualization system that reshapes the output of complex explainability methods into an easier-to-digest format through intuitive visual interfaces. The proposed system targets local XAI methods that provide explanations by generating a synthetic neighborhood around the explained instance and fitting a surrogate model from which decision rules are extracted. A scatter-plot projection shows how the model organizes instances in the projected 2D space, while a parallel, interconnected decision-tree diagram reveals to the user the logical rules underlying the predictions. The two views are coordinated so that users can explore the results through an interactive dialogue between spatial intuition and logical reasoning. The system offers multiple tree visualization options tailored to different needs, the result of an initial design later expanded on the basis of the surveyed literature. The proposed use cases demonstrate how the system supports iterative exploration workflows in which users adjust neighborhood generation parameters, regenerate explanations, and validate findings across different instances. This work aims to bridge the gap between powerful explanation algorithms and human understanding, providing tools to interpret, validate, and communicate machine learning decisions. In its current state, the system uses the LOREsa explainability method to generate explanations, but support for other state-of-the-art methods, such as LIME and SHAP, could be added with little effort.
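The neighborhood-plus-surrogate pattern the abstract refers to can be summarised in a few lines. The sketch below is illustrative only: it assumes a generic scikit-learn-style black-box classifier, Gaussian perturbations, and a shallow decision tree as the surrogate; the function name, perturbation scale, and tree depth are hypothetical choices and do not reflect the LOREsa implementation used in the thesis.

```python
# Illustrative sketch of local rule-based explanation via a synthetic
# neighborhood and a surrogate decision tree. All parameter values and
# the perturbation scheme are assumptions, not the thesis implementation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def explain_instance(black_box, x, feature_names=None,
                     n_samples=500, scale=0.3, max_depth=3, seed=0):
    """Generate a synthetic neighborhood around x, label it with the
    black box, and fit a shallow decision tree as a local surrogate."""
    rng = np.random.default_rng(seed)
    # Synthetic neighborhood: Gaussian perturbations of the explained instance.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    y = black_box.predict(Z)  # black-box labels for the synthetic points
    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(Z, y)
    # Human-readable rules extracted from the surrogate tree.
    rules = export_text(surrogate, feature_names=feature_names)
    return surrogate, Z, y, rules
```

In a coordinated interface such as the one described above, the returned neighborhood points would feed the scatter-plot projection and the surrogate tree would feed the rule diagram, with both views linked to the same underlying data.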