Thesis etd-11192025-153314
Thesis type
Master's thesis
Author
NICCOLAI, JACOPO
URN
etd-11192025-153314
Title
A machine unlearning approach based on multi-objective loss function to forget samples in object detection
Department
INGEGNERIA DELL'INFORMAZIONE
Degree programme
ARTIFICIAL INTELLIGENCE AND DATA ENGINEERING
Supervisors
Supervisor: Prof. Cimino, Mario Giovanni Cosimo Antonio
Supervisor: Dr. Parola, Marco
Keywords
- machine unlearning
- multi-objective
- selective forgetting
- user privacy
- yolo
Defense date
05/12/2025
Availability
Not available for consultation
Release date
05/12/2095
Abstract
Machine unlearning has become increasingly important for addressing privacy concerns in AI, particularly in scenarios requiring the selective removal of specific data from trained models. Traditional unlearning approaches typically focus on a single aspect: fine-tuning can maintain model performance but may not fully eliminate the forgotten information; gradient-based methods aggressively remove targeted knowledge but risk degrading performance on the remaining data; and sparsity-based penalties influence both goals indirectly and with limited control.

This work introduces a multi-objective unlearning loss function that integrates these complementary mechanisms within a unified framework. The method dynamically weights the contributions of fine-tuning on the retain set, gradient ascent on the forget set, and L1 sparsity regularization, jointly optimizing forgetting, utility preservation, and overall model stability.

A complete training pipeline based on YOLOv8 is implemented, with experiments conducted on Pascal VOC, KITTI Vision 2D, and the Construction Safety dataset. Exact unlearning procedures are used to produce golden models as ground-truth references, while approximate methods are evaluated for selective forgetting. Results show that the multi-objective strategy achieves a balanced trade-off between utility and privacy, maintaining competitive performance with reduced variance and improved stability compared to single-objective approaches.
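The abstract describes a loss that weights three terms: a fine-tuning term on the retain set, a gradient-ascent term on the forget set, and an L1 sparsity penalty. As a minimal sketch of how such a combination could look, assuming scalar loss terms and hypothetical weight names (this is not the thesis's actual code or coefficients):

```python
# Sketch (assumption, not the thesis's exact formulation) of a multi-objective
# unlearning loss. Subtracting the forget-set loss turns ordinary gradient
# descent into gradient ascent on the forget samples; the L1 term encourages
# sparsity. Weights w_retain, w_forget, w_sparsity are illustrative only.

def multi_objective_unlearning_loss(retain_loss, forget_loss, weights_l1,
                                    w_retain=1.0, w_forget=0.5, w_sparsity=1e-4):
    """Combine the three objectives into one scalar to minimize.

    retain_loss -- detection loss on the retain set (fine-tuning term)
    forget_loss -- detection loss on the forget set (negated: gradient ascent)
    weights_l1  -- L1 norm of the model parameters (sparsity penalty)
    """
    return w_retain * retain_loss - w_forget * forget_loss + w_sparsity * weights_l1


# Toy usage with scalar stand-ins for batch-level losses:
loss = multi_objective_unlearning_loss(retain_loss=0.8, forget_loss=2.0,
                                       weights_l1=100.0)
print(round(loss, 4))  # 1.0*0.8 - 0.5*2.0 + 1e-4*100 = -0.19
```

In a real pipeline these would be tensor-valued batch losses and the relative weights could be scheduled dynamically, as the abstract suggests; the sketch only shows the sign structure of the three objectives.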
File
| File name | Size |
|---|---|
| The thesis is not available for consultation. | |