
ETD

Digital archive of theses defended at the Università di Pisa

Thesis etd-04232021-094723


Thesis type
PhD thesis
Author
CRECCHI, FRANCESCO
URN
etd-04232021-094723
Title
Deep Learning Safety under Non-Stationarity Assumptions
Scientific-disciplinary sector
INF/01
Course of study
COMPUTER SCIENCE
Supervisors
tutor Prof. Bacciu, Davide
tutor Dr. Biggio, Battista
Keywords
  • DropIn; adversarial examples; detector
Defense date
26/04/2021
Availability
Full
Abstract
Deep Learning (DL) is having a transformational effect in critical areas such as finance, healthcare, transportation, and defense, impacting nearly every aspect of our lives. Many businesses, eager to capitalize on advancements in DL, may not have scrutinized the security issues potentially introduced by including such intelligent components in their systems. Building a trustworthy DL system requires enforcing key properties, including robustness, privacy, and accountability. This thesis aims to contribute to enhancing the robustness of DL models to input distribution drifts, i.e., situations where the training and test distributions differ. Notably, input distribution drifts may happen either naturally, induced by missing input data (e.g., due to a sensor fault), or adversarially, i.e., crafted by an attacker to steer the model's behavior as desired. In this thesis, we first provide a technique for making DL models robust to missing inputs by design, inducing resilience even in the case of sequential tasks. We then propose a detection framework for adversarial attacks that accommodates many techniques from the literature as well as novel proposals, such as our new detector, which exploits non-linear dimensionality reduction techniques at its core. Finally, by abstracting the analyzed defenses within our framework, we identify common drawbacks, which we propose to overcome with a fast adversarial example detection technique capable of a significant overhead reduction without sacrificing detector accuracy, both on clean data and under attack.
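As an illustration of the first contribution (robustness to missing inputs by design, cf. the DropIn keyword), the sketch below shows one plausible reading of the idea: input features are randomly masked during training, akin to dropout applied directly to the input layer, so the model learns to cope with missing sensors at test time. This is a minimal sketch under that assumption; the network architecture, drop probability, and data are illustrative placeholders, not the configuration studied in the thesis.

```python
# Hypothetical sketch of DropIn-style training: randomly zero out input
# features during training so the network tolerates missing inputs
# (e.g. a faulty sensor) at test time. All sizes and probabilities are
# placeholders, not taken from the thesis.
import torch
import torch.nn as nn

class DropInClassifier(nn.Module):
    def __init__(self, n_features=16, n_classes=3, p_drop=0.2):
        super().__init__()
        self.p_drop = p_drop  # probability that each input feature is missing
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        if self.training:
            # Simulate missing inputs: mask each feature independently.
            mask = (torch.rand_like(x) > self.p_drop).float()
            x = x * mask
        return self.net(x)

# Usage: train as usual; at test time, missing features are simply set to zero.
model = DropInClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 16)         # toy input batch
y = torch.randint(0, 3, (32,))  # toy labels
model.train()
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```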