
ETD

Digital archive of theses defended at the University of Pisa

Thesis etd-09032020-113502


Thesis type
Master's degree thesis
Author
GALASSI, ALESSANDRA
URN
etd-09032020-113502
Title
Explanation of Cardiovascular Risk Model Estimator
Department
INFORMATICA
Degree programme
DATA SCIENCE AND BUSINESS INFORMATICS
Supervisors
Supervisor: Prof. Rinzivillo, Salvatore
Keywords
  • outcome inspection
  • explanation algorithms
  • LORE
  • explainable artificial intelligence
  • risk assessment
  • health decision support system
  • risk score
  • GRACE
Defence date
09/10/2020
Availability
Not available for consultation
Release date
09/10/2090
Abstract
Cardiovascular disease (CVD) remains the leading cause of death worldwide and causes unaffordable social and health costs that tend to increase as the population ages. Much of this burden could be avoided if each patient received the most appropriate treatment. For this to happen, it is important to determine the patient's risk of having a cardiovascular event: this is known as risk assessment, and it can be done using risk scores. To improve the stratification of patients with coronary artery disease, the risk prediction tool GRACE is considered here. GRACE was built for short-term assessment, based on events of death or myocardial infarction (heart attack) in patients with coronary artery disease, and it returns a risk score that classifies patients as being at low or high cardiac risk.

In this work, the GRACE model (deployed by members of the Research Centre for Informatics and Systems of the University of Coimbra) is used first; then a new version of the GRACE model is implemented that provides a probability estimate of risk instead of a binary (high/low) classification. This thesis focuses on the problem of explaining the outcome of the GRACE risk model, i.e. explaining the reasons for the decision taken on a specific case. For this purpose, in addition to SHAP and LIME, two well-known Python libraries, LORE is also used: an agnostic method capable of providing interpretable and faithful explanations, consisting of a decision rule, which explains the reasons for the decision, and a set of counterfactual rules, which suggest the changes in the instance's features that would lead to a different outcome. Lastly, two classification models (one of which uses the SMOTE oversampling method) were built to better explore the reasons behind patient classification, and the results obtained with the different approaches were compared.

In recent years, many decision support systems have been constructed as black boxes, i.e. systems that hide their internal logic from the user, so it is worth investigating in this direction, especially when such systems are applied to critical domains such as health.
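The kind of local, rule-based outcome explanation described above (a decision rule for one instance, derived from an interpretable surrogate of the black box) can be sketched in a few lines. The following is a minimal illustration of the idea only, not the actual LORE library or the real GRACE model: it uses scikit-learn exclusively, and the feature names and synthetic "patients" are hypothetical placeholders, not GRACE variables.

```python
# Sketch: explain one prediction of a black-box classifier by fitting a
# shallow decision tree on a synthetic neighbourhood of the instance and
# reading off the tree's decision path as a human-readable rule.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "heart_rate", "systolic_bp"]  # hypothetical features

# Synthetic data: risk depends on a simple combination of the features
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Instance to explain, plus a synthetic neighbourhood around it labelled
# by the black box (the core trick behind local surrogate explainers)
x = X[0]
Z = x + rng.normal(scale=0.5, size=(1000, 3))
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(Z, black_box.predict(Z))

# Walk the surrogate's decision path for x to build the explanation rule
tree = surrogate.tree_
node = 0
conditions = []
while tree.children_left[node] != -1:  # -1 marks a leaf in sklearn trees
    f, t = tree.feature[node], tree.threshold[node]
    if x[f] <= t:
        conditions.append(f"{feature_names[f]} <= {t:.2f}")
        node = tree.children_left[node]
    else:
        conditions.append(f"{feature_names[f]} > {t:.2f}")
        node = tree.children_right[node]

outcome = "high risk" if black_box.predict(x.reshape(1, -1))[0] == 1 else "low risk"
print("IF " + " AND ".join(conditions) + f" THEN {outcome}")
```

Counterfactual rules, which LORE also produces, would additionally report which conditions on this path must be flipped to reach a leaf with the opposite label; that step is omitted here for brevity.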