Thesis etd-11192025-155226
Thesis type
Master's thesis
Author
RUGGIERI, ANDREA
URN
etd-11192025-155226
Title
Using Membership Inference Attack as an evaluation metric for Privacy Preserving Machine Unlearning
Department
INFORMATION ENGINEERING
Degree programme
ARTIFICIAL INTELLIGENCE AND DATA ENGINEERING
Supervisors
Supervisor Prof. Cimino, Mario Giovanni Cosimo Antonio
Supervisor Dr. Parola, Marco
Supervisor Dr. Galatolo, Federico Andrea
Keywords
- machine unlearning
- membership inference attack
- privacy
Defense session date
05/12/2025
Availability
Not available for consultation
Release date
05/12/2095
Abstract
The growing relevance of Privacy Preserving Machine Learning has made the relationship between user privacy and Artificial Intelligence an important research area, especially in light of recent legal frameworks such as the General Data Protection Regulation. Ensuring compliant systems requires reliable tools to assess privacy vulnerabilities in models and datasets. This thesis evaluates metrics designed to quantify privacy risks in Machine Unlearning, the branch of Artificial Intelligence aimed at removing the influence of specific training samples from a model, applied to Object Detection (OD). A central focus is the use of Membership Inference Attacks (MIA) as an evaluation metric for Machine Unlearning effectiveness. Standard MIA techniques are adapted to Object Detection, and the thesis provides an in-depth study of the Canvas method, an OD-tailored MIA approach that converts model predictions into visual representations. The Generalization Error is also examined, together with its relationship to overfitting and membership-inference vulnerability. The results show that basic MIA reliably detects membership-inference vulnerabilities in OD models, that Canvas MIA is strongly model-dependent, and that combining Generalization Error metrics with MIA-based ones yields a more complete and informative leakage assessment. Overall, the thesis demonstrates that MIA is the most sensitive tool for privacy risk evaluation in Machine Unlearning for Object Detection.
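To illustrate the intuition behind the abstract (not the thesis's actual implementation), the sketch below shows a minimal threshold-based Membership Inference Attack on synthetic per-sample losses: members of the training set tend to have lower loss than held-out samples, the mean-loss gap is the Generalization Error, and a simple loss threshold converts that gap into an attack. All distributions and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample losses from an overfit model:
# training samples (members) score lower loss than held-out ones.
member_losses = rng.normal(loc=0.2, scale=0.1, size=1000)
nonmember_losses = rng.normal(loc=0.8, scale=0.3, size=1000)

# Generalization Error: gap between held-out and training mean loss.
gen_error = nonmember_losses.mean() - member_losses.mean()

# Threshold attack: predict "member" whenever the loss falls below
# the midpoint between the two mean losses.
threshold = (member_losses.mean() + nonmember_losses.mean()) / 2
tpr = (member_losses < threshold).mean()      # members correctly flagged
fpr = (nonmember_losses < threshold).mean()   # non-members wrongly flagged
attack_accuracy = (tpr + (1 - fpr)) / 2       # balanced accuracy

print(f"generalization error: {gen_error:.2f}")
print(f"attack balanced accuracy: {attack_accuracy:.2f}")
```

A larger generalization gap (more overfitting) separates the two loss distributions further and pushes the attack's balanced accuracy above the 0.5 chance level, which is why the abstract pairs Generalization Error metrics with MIA-based ones; an effective unlearning procedure should drive the attack back toward chance on the forgotten samples.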
File
The thesis is not available for consultation.