Thesis etd-11192025-155226
Thesis type
Master's degree thesis
Author
RUGGIERI, ANDREA
URN
etd-11192025-155226
Thesis title
Using Membership Inference Attack as an evaluation metric for Privacy Preserving Machine Unlearning
Department
INGEGNERIA DELL'INFORMAZIONE
Course of study
ARTIFICIAL INTELLIGENCE AND DATA ENGINEERING
Supervisors
Supervisor Prof. Cimino, Mario Giovanni Cosimo Antonio
Supervisor Dr. Parola, Marco
Supervisor Dr. Galatolo, Federico Andrea
Keywords
- machine unlearning
- membership inference attack
- privacy
Graduation session start date
05/12/2025
Availability
Withheld
Release date
05/12/2095
Summary
The growing relevance of Privacy Preserving Machine Learning has made the relationship between user privacy and Artificial Intelligence an important research area, especially in light of recent legal frameworks such as the General Data Protection Regulation (GDPR). Ensuring compliant systems requires reliable tools to assess privacy vulnerabilities in models and datasets. This thesis evaluates metrics designed to quantify privacy risks in Machine Unlearning for Object Detection (OD); Machine Unlearning is the branch of Artificial Intelligence aimed at removing the influence of specific training samples from a trained model. A central focus is the use of Membership Inference Attacks (MIA) as an evaluation metric for Machine Unlearning effectiveness. Standard MIA techniques are adapted to Object Detection, and the thesis provides an in-depth study of the Canvas method, an OD-tailored MIA approach that converts model predictions into visual representations. The Generalization Error is also examined, together with its relationship to overfitting and membership-inference vulnerability. The results show that Basic MIA reliably detects membership-inference vulnerabilities in OD models, while Canvas MIA is strongly model-dependent, and that combining Generalization Error metrics with MIA-based ones yields a more complete and informative leakage assessment. Overall, the thesis demonstrates that MIA is the most sensitive tool for privacy risk evaluation in Machine Unlearning for Object Detection.
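To illustrate the general idea behind the basic MIA mentioned in the summary (not the thesis's actual implementation), the following minimal sketch uses a confidence-threshold attack: models often assign higher confidence to samples seen during training, so an attacker can guess membership by thresholding the confidence. All numbers and names here are hypothetical, chosen only to simulate that gap.

```python
import random

random.seed(0)

# Hypothetical confidences: training ("member") samples tend to score
# higher than unseen ("non-member") samples, simulating overfitting.
members = [random.gauss(0.90, 0.05) for _ in range(1000)]
non_members = [random.gauss(0.70, 0.15) for _ in range(1000)]

def mia_predict(confidence, threshold=0.8):
    """Basic threshold MIA: guess 'member' when confidence > threshold."""
    return confidence > threshold

# Attack accuracy = fraction of correct membership guesses;
# values well above 0.5 indicate membership leakage.
correct = sum(mia_predict(c) for c in members)
correct += sum(not mia_predict(c) for c in non_members)
accuracy = correct / (len(members) + len(non_members))
print(f"attack accuracy: {accuracy:.2f}")
```

In the unlearning setting, the same attack run on "forgotten" samples should ideally drop back toward 0.5 (random guessing), which is what makes MIA usable as an unlearning-effectiveness metric.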
File
The thesis is not available.