Thesis etd-09062025-105047
  
    Thesis type
  
  
    Master's thesis
  
    Author
  
  
    DAKA, CLAUDIO  
  
    URN
  
  
    etd-09062025-105047
  
    Title
  
  
    Extracting Region-Aware Counterfactual Rules for Model-Agnostic Explainable Artificial Intelligence
  
    Department
  
  
    INGEGNERIA DELL'INFORMAZIONE
  
    Degree programme
  
  
    COMPUTER ENGINEERING
  
    Supervisors
  
  
    Supervisor: Prof. Cimino, Mario Giovanni Cosimo Antonio
Supervisor: Prof. Alfeo, Antonio Luca
Supervisor: Dr. Gagliardi, Guido
  
    Keywords
  
- Black-box Models
- Counterfactual Explanations
- Explainable AI
- Model-Agnostic Framework
- Rule Extraction
    Defence date
  
  
    02/10/2025
  
    Availability
  
  
    Not available for consultation
  
    Release date
  
  
    02/10/2028
  
    Abstract
  
The widespread deployment of complex "black-box" AI models in critical domains necessitates solutions that ensure trust, accountability, and the ability to detect bias. This need is further intensified by regulatory frameworks, such as the General Data Protection Regulation (GDPR) and the EU AI Act, which legally mandate transparency and explainability in automated decision-making processes. This thesis presents a novel, model-agnostic Explainable Artificial Intelligence (XAI) framework designed to extract counterfactual rules from tabular data. The core of this approach lies in generating human-comprehensible "IF-THEN" rules. These rules explain how minimal input changes, specifically the alteration of a single feature's value, can lead to a different prediction from the target model. To achieve this, the framework samples instances from diverse regions of the input space. This process allows for the generation of minimal and region-aware rules that collectively encapsulate the global decision-making logic of the underlying model. These global rules can then be further specialized and localized to specific input instances, providing users with tailored, actionable explanations for individual predictions. Through comprehensive experiments conducted on multiple benchmark datasets, the proposed method's performance was evaluated against other state-of-the-art techniques. The results demonstrate that this framework achieves competitive performance in key metrics, including fidelity (how accurately the explanation reflects the model's behavior) and coverage (the portion of input instances that the rules can explain). The resulting rules provide actionable insights that empower users to understand the necessary input modifications for achieving a desired outcome. Ultimately, this capability is crucial for effective model debugging and fostering greater confidence in AI systems.
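To make the fidelity and coverage metrics mentioned in the abstract concrete, the following minimal Python sketch shows one way a single-feature counterfactual "IF-THEN" rule could be represented and scored against a black-box classifier. The `CounterfactualRule` structure, the function names, and the exact scoring choices are illustrative assumptions, not the implementation described in the thesis.

```python
# Minimal sketch (illustration only, not the thesis's implementation) of how a
# single-feature counterfactual rule might be represented and evaluated for
# coverage and fidelity against a black-box model's predict function.
from dataclasses import dataclass
from typing import Callable, Tuple
import numpy as np


@dataclass
class CounterfactualRule:
    feature: int       # index of the single feature the rule alters
    new_value: float   # counterfactual value assigned to that feature
    source_class: int  # IF the model predicts this class for an instance...
    target_class: int  # ...THEN the single-feature change should flip it to this class


def apply_rule(rule: CounterfactualRule, X: np.ndarray) -> np.ndarray:
    """Return a copy of X with the rule's feature set to its counterfactual value."""
    X_cf = X.copy()
    X_cf[:, rule.feature] = rule.new_value
    return X_cf


def coverage_and_fidelity(rule: CounterfactualRule,
                          X: np.ndarray,
                          predict: Callable[[np.ndarray], np.ndarray]) -> Tuple[float, float]:
    """Coverage: fraction of instances the rule applies to (model predicts source_class).
    Fidelity: among covered instances, the fraction whose prediction actually flips to
    target_class after the single-feature change, i.e. how faithfully the rule mirrors
    the black-box model's behaviour."""
    y = predict(X)
    covered = y == rule.source_class
    coverage = float(covered.mean())
    if not covered.any():
        return coverage, 0.0
    y_cf = predict(apply_rule(rule, X[covered]))
    fidelity = float((y_cf == rule.target_class).mean())
    return coverage, fidelity
```

For example, a rule of the form "IF the model rejects an applicant, setting income to a given counterfactual value yields approval" would be scored by measuring how many instances the model rejects (coverage) and, among those, how often the prediction actually flips after that single change (fidelity).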
    File
  
| File name | Size |
|---|---|
| The thesis is not available for consultation. | |
 
		