
ETD

Digital archive of theses discussed at the University of Pisa


Thesis etd-04232021-094723


Thesis type
PhD thesis (tesi di dottorato di ricerca)
Author
CRECCHI, FRANCESCO
URN
etd-04232021-094723
Thesis title
Deep Learning Safety under Non-Stationarity Assumptions
Academic discipline
INF/01
Course of study
INFORMATICA
Supervisors
tutor Prof. Bacciu, Davide
tutor Dott. Biggio, Battista
Keywords
  • DropIn; adversarial examples; detector
Graduation session start date
26/04/2021
Availability
Full
Summary
Deep Learning (DL) is having a transformational effect in critical areas such as finance, healthcare, transportation, and defense, impacting nearly every aspect of our lives. Many businesses, eager to capitalize on advancements in DL, may not have scrutinized the security issues that arise from including such intelligent components in their systems. Building a trustworthy DL system requires enforcing key properties, including robustness, privacy, and accountability. This thesis aims to enhance the robustness of DL models to input distribution drifts, i.e., situations where the training and test distributions differ. Notably, input distribution drifts may arise either naturally, induced by missing input data (e.g., due to a sensor fault), or adversarially, i.e., caused by an attacker to steer the model's behavior as desired. In this thesis, we first present a technique for making DL models robust to missing inputs by design, inducing resilience even in the case of sequential tasks. We then propose a detection framework for adversarial attacks that accommodates many techniques from the literature as well as novel proposals, such as our new detector exploiting non-linear dimensionality reduction techniques at its core. Finally, by abstracting the analyzed defenses within our framework, we identify common drawbacks, which we propose to overcome with a fast adversarial-example detection technique capable of substantially reducing overhead without sacrificing detector accuracy, both on clean data and under attack.
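To illustrate the first idea in the abstract, robustness to missing inputs by design, the sketch below randomly zeroes input features during training, simulating faulty sensors. This is a minimal, hypothetical illustration of a DropIn-style augmentation, not the thesis's actual implementation; the function name and parameters are assumptions for this example.

```python
import numpy as np

def drop_inputs(x, drop_prob, rng):
    """Randomly zero out input features, simulating missing sensors.

    Training a model on inputs masked this way is one way to make it
    robust to missing data by design (illustrative sketch only; the
    thesis's DropIn technique may differ in detail).
    """
    mask = rng.random(x.shape) >= drop_prob  # True = feature survives
    return x * mask

rng = np.random.default_rng(0)
x = np.ones((4, 8))  # toy batch: 4 samples, 8 features
x_masked = drop_inputs(x, drop_prob=0.25, rng=rng)
# Each feature survives with probability ~0.75, so at every training
# step the model sees inputs with randomly missing entries.
```

At test time, a genuinely missing sensor reading can then be encoded the same way (a zeroed feature), a pattern the model has already learned to tolerate.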