
ETD

Digital archive of the theses defended at the Università di Pisa

Thesis etd-03142024-155957


Thesis type
Master's degree thesis
Author
ARCANGELI, ANDREA
Email address
a.arcangeli1@studenti.unipi.it, duecce93@hotmail.it
URN
etd-03142024-155957
Title
Convolutional Neural Networks Latent-Space analysis with Reject Option: application to medical images classification
Department
INGEGNERIA DELL'INFORMAZIONE
Degree programme
INGEGNERIA BIOMEDICA
Supervisors
Supervisor Prof. Vozzi, Giovanni
Supervisor Prof. Positano, Vincenzo
Co-supervisor De Santi, Lisa Anita
Keywords
  • manifold learning
  • latent space
  • explainable artificial intelligence
  • deep learning
  • convolutional neural network
  • cnn
  • artificial intelligence
  • AI
  • medical images classification
  • reject option
  • XAI
Defence session start date
18/04/2024
Availability
Not available for consultation
Release date
18/04/2094
Abstract
Deep neural networks demonstrate performance on par with, or better than, clinicians in many tasks, thanks to the rapid increase in available data and computing power. To comply with the principles of trustworthy AI, an AI system must be transparent, robust, and fair, and must ensure accountability. Ensuring the interpretability of deep neural networks (DNNs) may therefore be crucial before they can be incorporated into the routine clinical workflow (Salahuddin et al., 2022).
Deep learning (DL) models are composed of several layers that process and learn data representations at multiple levels of abstraction, without requiring human-engineered features. The multidimensional space generated by the feature extraction performed by the network's layers is often referred to as the latent space, or feature space; each dimension of the latent space corresponds to a specific feature or attribute that the DNN has identified in the input data. The nonlinearity and depth of these models, which feature tens or even hundreds of processing layers and thousands to millions of parameters, make them black-box models with opaque internal mechanisms. Convolutional neural networks (CNNs) have emerged as the de facto standard for computer vision problems, and DL has achieved state-of-the-art performance in numerous medical imaging tasks, especially classification (Jiang et al., 2023). Explainable Artificial Intelligence (XAI) refers to AI solutions that can provide insights into the internal workings of DL models in a manner understandable to the end user. In the medical field, the purpose of AI is to assist physicians in performing their duties more efficiently and accurately, not to replace them; this collaboration requires trust from clinical experts, and trust is built on understanding (Lipton, 2017).
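
As a minimal sketch of what extracting such a latent space can look like in practice (assuming a PyTorch CNN; the torchvision ResNet-18 used here is only an illustrative stand-in for the thesis' classifier, and the input batch is dummy data rather than medical images):

    import torch
    import torchvision.models as models

    # Illustrative model standing in for the thesis' CNN classifier.
    model = models.resnet18(weights=None)
    model.eval()

    latent = {}

    def save_latent(module, inputs, output):
        # Flatten to (batch, features): each column is one latent-space dimension.
        latent["z"] = torch.flatten(output, start_dim=1).detach()

    # Hook the layer that feeds the final fully connected classifier head,
    # i.e. the last layer before classification.
    model.avgpool.register_forward_hook(save_latent)

    images = torch.randn(8, 3, 224, 224)  # dummy batch in place of medical images
    with torch.no_grad():
        logits = model(images)

    print(latent["z"].shape)  # e.g. torch.Size([8, 512]): a 512-dimensional latent space
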
This thesis introduces and validates a novel reject option strategy for deep learning-based classifiers, demonstrating its applicability in three distinct case studies. The proposed reject option leverages the Data Point Target Density (DPTD), a measure devised specifically for this purpose, to extract features from the latent space, together with the k* value of the test sample under evaluation. To facilitate a deeper understanding of the rejector's decisions, the CNN's latent space at the last layer before the final classification was extracted and reduced to a 2-D visualization with the t-SNE manifold learning technique. This visualization incorporates the Degree of Locality Preservation (DLP) of each point, quantifying how faithfully each data point's neighbourhood is preserved in the transition from the original high-dimensional space to the reduced embedding space.
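
The DPTD, k*, and DLP quantities are defined in the thesis itself and are not reproduced here; as a minimal sketch of the visualization step alone (assuming the latent vectors are already available as a NumPy array, with placeholder data standing in for real features and predictions), the 2-D t-SNE embedding can be obtained as follows:

    import numpy as np
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    # Placeholder latent vectors (one row per test image) and predicted labels.
    features = np.random.rand(200, 512)
    predicted_class = np.random.randint(0, 3, size=200)

    # Reduce the latent space to two dimensions with t-SNE.
    embedding = TSNE(n_components=2, perplexity=30.0, init="pca",
                     random_state=0).fit_transform(features)

    # Scatter plot of the embedding, coloured by predicted class.
    plt.scatter(embedding[:, 0], embedding[:, 1], c=predicted_class, s=10)
    plt.title("t-SNE view of the CNN latent space")
    plt.show()
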
File