ETD

Digital archive of theses discussed at the University of Pisa

Thesis etd-10262016-122400


Thesis type
Master's thesis
Author
FUMAROLA, ROBERTA
URN
etd-10262016-122400
Title
Implementation of techniques for adversarial detection in image classification
Department
INGEGNERIA DELL'INFORMAZIONE
Degree programme
COMPUTER ENGINEERING
Supervisors
Supervisor Prof. Falchi, Fabrizio
Supervisor Prof. Caldelli, Roberto
Supervisor Prof. Amato, Giuseppe
Keywords
  • image classification
  • adversarial examples
  • deep neural networks (DNNs)
  • fooling images
Defense session date
24/11/2016
Availability
Full
Abstract
Deep neural networks (DNNs) have recently led to significant improvements in many areas of machine learning, from speech recognition to computer vision. It was recently shown, however, that machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is an input sample that has been modified very slightly in a way intended to cause a machine learning classifier to misclassify it. These adversarial examples are relatively robust and are shared by neural networks with different numbers of layers or activation functions, or trained on different subsets of the training data. They can therefore be used to attack machine learning systems even when the adversary has no access to the underlying model.
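As an illustration of how such a slight, intentional perturbation can be constructed, one well-known generation method from this literature is the fast gradient sign method (FGSM), which nudges the input in the sign direction of the loss gradient. The minimal sketch below applies that idea to a toy logistic classifier; the model, weights, and step size are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM-style perturbation for a logistic classifier.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack steps eps in its sign direction,
    which increases the loss and can flip the predicted class.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Toy linear classifier: predicts class 1 when 2*x[0] > 0 (hypothetical weights).
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.3, 0.5])                  # correctly classified as class 1
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.5)

print(sigmoid(w @ x + b) > 0.5)       # True: original input is class 1
print(sigmoid(w @ x_adv + b) > 0.5)   # False: perturbed input is misclassified
```

Note that the perturbation only moves the coordinate the gradient actually depends on (here x[0]); with high-dimensional image inputs the same eps-sized step per pixel is visually almost imperceptible, which is what makes these examples dangerous.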
In this thesis we explore the nature of these adversarial images, describe the methods used to generate fooling examples, and review the techniques used to make a DNN more robust. In addition, we present our study of adversarial images, which explores the feature space of the images using Euclidean distances in order to detect them, and proposes a possible solution to support a neural network in its classification task.
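The detection idea named in the abstract — measuring Euclidean distances in a feature space — could be sketched roughly as follows. The sketch assumes a feature extractor has already mapped images to vectors, and flags an input whose features lie unusually far from every class centroid; the centroid-threshold rule, names, and toy data are illustrative assumptions, not the thesis's actual method:

```python
import numpy as np

def class_centroids(features, labels):
    """Mean feature vector per class, computed from a (hypothetical) training set."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def min_centroid_distance(x, centroids):
    """Euclidean distance from feature vector x to the nearest class centroid."""
    return min(np.linalg.norm(x - mu) for mu in centroids.values())

def flag_adversarial(x, centroids, threshold):
    """Flag an input whose features are far from every class centroid."""
    return min_centroid_distance(x, centroids) > threshold

# Toy demo: two well-separated classes in a 2-D "feature space".
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (50, 2)),   # class 0 near (0, 0)
                   rng.normal(5.0, 0.1, (50, 2))])  # class 1 near (5, 5)
labels = np.array([0] * 50 + [1] * 50)
cents = class_centroids(feats, labels)

clean = np.array([0.05, -0.02])   # close to the class-0 centroid
odd = np.array([2.5, 2.5])        # far from both centroids

print(flag_adversarial(clean, cents, threshold=1.0))  # False
print(flag_adversarial(odd, cents, threshold=1.0))    # True
```

In practice the feature vectors would come from an intermediate layer of the trained DNN rather than raw pixels, and the threshold would be calibrated on held-out clean data.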