Thesis etd-07112024-132341
Thesis type
Master's thesis
Author
BARBIERI, GIOVANNI
URN
etd-07112024-132341
Title
Design and testing of a weighted aggregation method for a counterfactual-based feature importance measure and its application to industrial production quality recognition
Department
INGEGNERIA DELL'INFORMAZIONE
Degree programme
COMPUTER ENGINEERING
Supervisors
Supervisor: Prof. Cimino, Mario Giovanni Cosimo Antonio
Co-supervisor: Ing. Alfeo, Antonio Luca
Keywords
- explainable artificial intelligence
- industry 4.0
Defense date
26/07/2024
Availability
Not available for consultation
Release date
26/07/2064
Abstract
Explainable Artificial Intelligence (XAI) is a research area focused on making complex AI models
transparent and interpretable. Explainability is crucial for ensuring trust, fairness, and accountability in
AI-based systems, especially in critical sectors such as healthcare, finance, and law. Among the various
methods proposed to enhance explainability, DiCE (Diverse Counterfactual Explanations) has become
one of the most widely used, despite its high computational load.
My thesis work is based on an innovative method called BoCSoR (Boundary-based Classifier Score
Regression), which offers a significant reduction in computational load compared to DiCE while
maintaining competitive accuracy. BoCSoR derives its explanations by analyzing how a sample's
classification varies when its features are replaced with values taken from samples of another class,
reducing the time required to obtain interpretable explanations.
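To make the mechanism concrete, below is a minimal Python sketch of this per-feature swap test against a generic scikit-learn-style classifier. The abstract does not give the thesis' exact implementation, so every name here (`swap_scores`, `clf`, `X_other`, `orig_class`) is an illustrative assumption, not BoCSoR's actual code.

```python
import numpy as np

def swap_scores(clf, x, X_other, feature_idx, orig_class):
    """Score perturbed copies of x obtained by replacing one feature with
    the values observed in samples of another class.

    clf is any fitted classifier exposing predict_proba (scikit-learn
    style). Illustrative sketch only, not the thesis' implementation."""
    scores = np.empty(len(X_other))
    for i, value in enumerate(X_other[:, feature_idx]):
        x_perturbed = x.copy()
        x_perturbed[feature_idx] = value              # swap in the other-class value
        proba = clf.predict_proba(x_perturbed.reshape(1, -1))[0]
        scores[i] = proba[orig_class]                 # confidence in the original class
    return scores
```

Under this reading, a feature whose swaps consistently push the classifier's confidence below the decision threshold is one whose value strongly determines the sample's class, and is therefore important.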
My primary contribution has been the introduction of a new aggregation method within the BoCSoR
framework. This method considers the extent to which the class boundary is crossed, further improving
the fidelity of the provided explanations without increasing the computational load. Specifically, I
developed a technique that quantifies the degree of class boundary crossing and integrates this
information into the explanation process.
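A plausible reading of this weighted aggregation, assuming a binary classifier whose decision boundary sits at a class probability of 0.5, is sketched below; the thesis' actual weighting scheme may differ.

```python
import numpy as np

def weighted_importance(scores, threshold=0.5):
    """Aggregate the swap scores of one feature into an importance value.

    Rather than counting boundary crossings (scores falling below the
    threshold) with equal weight, each crossing is weighted by how far
    past the boundary it lands. Illustrative sketch only."""
    depth = np.clip(threshold - scores, 0.0, None)    # crossing depth, 0 where no crossing
    return float(depth.mean())                        # deeper crossings weigh more
```

Combined with the previous sketch, ranking features by `weighted_importance(swap_scores(clf, x, X_other, j, orig_class))` for each feature index `j` would yield a per-feature importance profile at no extra classifier queries beyond the swaps themselves.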
Experimental results demonstrate that the proposed method not only preserves the computational
efficiency of BoCSoR but also enhances its fidelity.
The datasets of primary importance in this thesis are related to the Industry 4.0 sector, and more
precisely to paper production and quality. Other benchmark datasets were also included in the
experimentation, in order to have a larger amount of data to work with and thus give solid validity
to the obtained results.
This research bridges the gap between explainability and computational efficiency in AI models, offering
a practical and scalable solution for real-world applications. In conclusion, the proposed improvement
in the BoCSoR method represents a significant step towards more transparent and reliable AI systems,
with positive implications for their adoption in critical contexts.
File
File name | Size
---|---
The thesis is not available for consultation. |