Thesis etd-02132023-170912
Thesis type
PhD thesis
Author
IADAROLA, GIACOMO
URN
etd-02132023-170912
Title
Representation and Detection of Malware Families using Explainable Approaches
Scientific disciplinary sector
INF/01
Degree programme
INFORMATICA
Supervisors
tutor Dott. Martinelli, Fabio
supervisor Prof. Micheli, Alessio
supervisor Dott. Mercaldo, Francesco
Keywords
- cybersecurity
- deep learning
- explainable AI
- malware analysis
- model checking
Defense session start date
21/02/2023
Availability
Full
Abstract
Malware analysis and detection is a long-standing research topic in the cybersecurity field. In the last decade, the massive quantity of available data has pushed researchers toward data-driven approaches, also because classical methods (such as signature-based techniques) cannot scale to such a quantity of data. Nevertheless, these methods have vulnerabilities and weaknesses that deserve deep investigation. One of the most significant controversies concerns the "explainability" of such methodologies.
This thesis studies how malware can be represented to highlight malicious behavior and which detection techniques can be applied to classify such samples. We represent malware as graphs and images and adopt deep learning and model-checking techniques to distinguish between malicious and benign samples. The dissertation is guided by the comparison of these methodologies and by "explainability": we aim to provide a malware detection methodology whose output predictions are easily interpretable by humans.
Files
File name | Size
---|---
Iadarola...hesis.pdf | 16.94 Mb