Thesis etd-02132023-170912
Thesis type
Doctoral thesis (PhD)
Author
IADAROLA, GIACOMO
URN
etd-02132023-170912
Thesis title
Representation and Detection of Malware Families using Explainable Approaches
Academic discipline
INF/01
Course of study
INFORMATICA
Supervisors
Tutor: Dr. Martinelli, Fabio
Supervisor: Prof. Micheli, Alessio
Supervisor: Dr. Mercaldo, Francesco
Keywords
- cybersecurity
- deep learning
- explainable AI
- malware analysis
- model checking
Graduation session start date
21/02/2023
Availability
Full
Summary
Malware analysis and detection is a long-standing research topic in the cybersecurity field. In the last decade, the massive quantity of available data has pushed researchers toward data-driven approaches, also because classical methods (such as signature-based techniques) cannot scale to such volumes of data. Nevertheless, these methods have vulnerabilities and weaknesses that deserve deep investigation. One of the most significant controversies concerns the "explainability" of such methodologies.
This thesis studies how malware can be represented to highlight malicious behavior and which detection techniques can be applied to classify such samples. We represent malware as graphs and images and adopt deep learning and model-checking techniques to distinguish between malicious and benign samples. The dissertation is guided by the comparison of these methodologies and by "explainability": we aim to provide a malware detection methodology whose output predictions are easily interpretable by humans.
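The "malware as image" representation mentioned in the summary is commonly obtained by reshaping a sample's raw bytes into a grayscale bitmap, which a convolutional network can then classify. The following is a minimal sketch of that idea in Python; the function name, fixed image width, and zero-padding strategy are illustrative assumptions, not the thesis's exact parameters.

```python
import numpy as np
from PIL import Image

def malware_to_grayscale_image(path, width=256):
    """Reshape a binary file's bytes into a 2D grayscale image.

    Generic illustration of the byte-to-image representation; width and
    padding are arbitrary choices here, not taken from the thesis.
    """
    data = np.fromfile(path, dtype=np.uint8)
    # Pad with zero bytes so the byte stream fills complete rows.
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(data)] = data
    return Image.fromarray(padded.reshape(height, width), mode="L")

# Example usage (hypothetical file names):
# img = malware_to_grayscale_image("sample.exe")
# img.save("sample.png")
```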
File
File name | Size
---|---
Iadarola...hesis.pdf | 16.94 MB