Thesis etd-09042025-151647
  
    Thesis type
  
  
    Master's thesis
  
    Author
  
  
    BITONTI, GIOVANNI  
  
    URN
  
  
    etd-09042025-151647
  
    Title
  
  
    Deep learning-based prediction of malignancy of massive lesions in mammography and explanation insights
  
    Department
  
  
    PHYSICS
  
    Degree program
  
  
    PHYSICS
  
    Supervisors
  
  
    Supervisor: Prof. Retico, Alessandra
  
    Keywords
  
  - ai
- black-box
- cbis-ddsm
- cdss
- cnn
- dicom
- dl
- gdpr
- grad-cam
- Keras
- ml
- mri
- Python
- tcia
- TensorFlow
- xai
    Defense date
  
  
    22/09/2025
  
    Availability
  
  
    Not available for consultation
  
    Release date
  
  
    22/09/2028
  
    Abstract
  
Artificial intelligence (AI) applications are increasingly deployed in healthcare, especially in clinical decision support systems (CDSS), but their adoption is limited by their opacity (the "black box" problem), which explainable AI (XAI) has been developed to address.
After some formal definitions of explainability and a summary of its medical, ethical, and legal implications, a schematic taxonomy of explanation methods is proposed. Following this scheme, a CDSS based on a Convolutional Neural Network (CNN) that scans mammograms to classify breast masses as malignant or benign has been developed, along with an explainability framework that interprets the system's predictions with Grad-CAM.
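As a concrete illustration, the sketch below shows a minimal Keras CNN of the kind described. The input size (224×224 grayscale patches), layer widths, and hyperparameters are assumptions for illustration only, not the architecture actually trained in the thesis.

```python
# Minimal sketch of a binary mass classifier of the kind described above.
# Input size, layer widths, and hyperparameters are illustrative assumptions,
# not the network actually used in the thesis.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_mass_classifier(input_shape=(224, 224, 1)):
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same", name="last_conv"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # P(malignant)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy", keras.metrics.AUC(name="auc")])
    return model
```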
After introductory considerations concerning medical imaging and radiography, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), publicly available on The Cancer Imaging Archive (TCIA) hosted by the U.S. National Cancer Institute, is described; the mammograms it contains were preprocessed with TensorFlow and used to train several CNNs built with Keras. After several optimization attempts, the best-performing model was tested. The Grad-CAM heatmaps generated from the model outputs were superimposed on the test images to highlight the areas primarily involved in each classification. To assess the quality of these explanations, the correspondence between the position of the lesions and the most intense portion of the heatmaps was evaluated.
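The Grad-CAM heatmaps mentioned above can be produced with a standard TensorFlow implementation such as the sketch below. It assumes the illustrative model above (whose final convolutional layer was named `last_conv`); the real thesis model's layer names are not given here.

```python
# Grad-CAM sketch, assuming the illustrative classifier defined earlier;
# "last_conv" is an assumed layer name, not the thesis's actual naming.
import numpy as np
import tensorflow as tf
from tensorflow import keras

def grad_cam_heatmap(model, image, last_conv_layer_name="last_conv"):
    """Return a [0, 1] Grad-CAM heatmap for one preprocessed (H, W, C) image."""
    # Auxiliary model mapping the input to (last conv activations, prediction).
    grad_model = keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, 0]  # sigmoid output: P(malignant)
    grads = tape.gradient(score, conv_out)
    # Channel weights = gradients global-average-pooled over space.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam)              # keep positive evidence only
    cam /= tf.reduce_max(cam) + 1e-8   # normalize to [0, 1]
    return cam.numpy()

# Superimposing the heatmap on the mammogram, as described in the abstract
# (with matplotlib.pyplot imported as plt):
# heatmap = grad_cam_heatmap(model, image)
# plt.imshow(image.squeeze(), cmap="gray")
# plt.imshow(tf.image.resize(heatmap[..., None], image.shape[:2]).numpy().squeeze(),
#            cmap="jet", alpha=0.4)
```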
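One simple way to quantify the stated correspondence between lesion position and heatmap intensity is a pointing-game style check: does the most intense heatmap pixel fall inside the annotated lesion ROI? The sketch below is one plausible reading of such an evaluation, not necessarily the metric used in the thesis; `lesion_mask` is assumed to be a boolean array derived from the CBIS-DDSM ROI annotations.

```python
# Pointing-game style check (an assumed evaluation, not necessarily the
# thesis's metric): is the heatmap's most intense pixel inside the lesion?
import numpy as np

def heatmap_hits_lesion(heatmap, lesion_mask):
    """Both arrays share the same shape; the mask marks the annotated ROI."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return bool(lesion_mask[y, x])

# Hypothetical usage: fraction of test images where the strongest
# Grad-CAM response lies on the annotated lesion.
# hit_rate = np.mean([heatmap_hits_lesion(h, m) for h, m in zip(heatmaps, masks)])
```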
    File
  
| File name | Size |
|---|---|
| The thesis is not available for consultation. | |
 
		