Thesis etd-04292025-182447
Thesis type
Master's thesis (laurea magistrale)
Author
SATHIK BASHA, MOHAMED ARAFAATH
URN
etd-04292025-182447
Title
Interpreting Offline Implicit Q-Learning for Personalized ICU Ventilation via Layer-wise Relevance Propagation and Post-hoc Decision Tree Rule Extraction
Department
COMPUTER SCIENCE
Degree programme
DATA SCIENCE AND BUSINESS INFORMATICS
Supervisors
Supervisor: Dr. Guidotti, Riccardo
Keywords
- Explainable AI
- Fitted Q-Evaluation
- ICU Ventilator Management
- Implicit Q-Learning
- Layer-wise Relevance Propagation
- MIMIC-IV Dataset
- Post-hoc Decision Tree
- Reinforcement Learning
Defense session date
30/05/2025
Availability
Not available for consultation
Release date
30/05/2028
Abstract
Mechanical ventilation in intensive care units (ICUs) demands patient-specific settings (e.g., PEEP, FiO2) to optimize outcomes and mitigate complications such as ventilator-induced lung injury, but opaque reinforcement learning (RL) models impede clinical trust. This thesis presents an explainable AI (XAI) framework for offline Implicit Q-Learning (IQL) using the MIMIC-IV dataset, enhancing transparency in ventilator management while prioritizing safety through offline training. Layer-wise Relevance Propagation (LRP) is applied to IQL actions by backpropagating action values to derive relevance scores, quantifying the short-term contributions of state features (e.g., partial thromboplastin time) and action features (e.g., the inspiratory-to-expiratory ratio). Global analyses identify key drivers, while local analyses offer patient-specific insights. LRP on Fitted Q-Evaluation (FQE) Q-values quantifies long-term feature contributions, and the combined relevance scores from IQL actions are validated against those from FQE for fidelity and consistency. A post-hoc decision tree distills IQL policies into interpretable rules aligned with evidence-based medicine, improving clinical usability. Validated on unseen data and refined through clinician feedback, the framework ensures robustness and alignment with ICU practice. Conducted within the IntelliLung project at the Institute for Applied Informatics, this work fosters trust in AI-driven decisions and offers generalizability to safety-critical clinical domains.
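The abstract describes two concrete mechanisms: backpropagating a selected action's Q-value through the network with LRP to obtain per-feature relevance scores, and distilling the learned policy into a shallow decision tree of readable rules. The sketch below illustrates the first idea with a generic epsilon-rule LRP pass over a small ReLU Q-network; the network shape, feature count, and random parameters are hypothetical and are not the thesis implementation.

```python
# Minimal LRP sketch (hypothetical network and shapes): propagate the Q-value
# of a chosen action back to the input features of a small ReLU Q-network
# using the epsilon rule. Illustrative only, not the thesis code.
import numpy as np

def lrp_epsilon(weights, biases, x, action_idx, eps=1e-6):
    """Return one relevance score per input feature for the chosen action."""
    # Forward pass, storing the input to each layer.
    activations = [x]
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, W @ a + b)             # hidden ReLU layers
        activations.append(a)
    q = weights[-1] @ a + biases[-1]               # one Q-value per discrete action

    # Initialize relevance with the selected action's Q-value only.
    R = np.zeros_like(q)
    R[action_idx] = q[action_idx]

    # Backward pass, epsilon rule: R_i = sum_j a_i * w_ji / (z_j + eps*sign(z_j)) * R_j
    for W, b, a_prev in zip(reversed(weights), reversed(biases), reversed(activations)):
        z = W @ a_prev + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer against division by zero
        s = R / z
        R = a_prev * (W.T @ s)
    return R

# Toy usage: 4 state features, 8 hidden units, 3 discrete ventilator actions.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
biases = [np.zeros(8), np.zeros(3)]
print(lrp_epsilon(weights, biases, rng.normal(size=4), action_idx=1))
```

The same conservation-based pass applies when the network is the FQE critic rather than the IQL critic, which is how the long-term relevance scores mentioned in the abstract could be obtained. The second sketch illustrates post-hoc rule extraction: fitting a shallow decision tree to the actions a trained policy selects on logged states and printing the resulting rules. The feature names, the `trained_policy` stand-in, and the tree depth are assumptions chosen for illustration.

```python
# Minimal policy-distillation sketch (hypothetical features and policy):
# fit a shallow decision tree on (state, policy action) pairs and print rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["PEEP", "FiO2", "SpO2", "PTT"]        # illustrative state features
states = rng.normal(size=(500, len(feature_names)))     # stand-in for logged ICU states

def trained_policy(batch):
    # Placeholder for the IQL actor: returns one discrete action id per state.
    return (batch[:, 0] + batch[:, 2] > 0).astype(int)

actions = trained_policy(states)

# A shallow tree keeps the extracted rules short enough for clinical review.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(states, actions)

print("fidelity to the policy:", tree.score(states, actions))
print(export_text(tree, feature_names=feature_names))
```

The tree's accuracy against the policy's own actions (its fidelity) is the natural check that the extracted rules faithfully summarize the policy, mirroring the fidelity validation described in the abstract.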
Files
The thesis is not available for consultation.