ETD

Digital archive of theses defended at the Università di Pisa

Thesis etd-01262019-141307


Type of thesis
Master's thesis
Author
BRAU, FABIO
Email address
fabio.br.92@hotmail.it
URN
etd-01262019-141307
Title
Defenses against Adversarial Examples in Deep Neural Networks
Department
MATEMATICA
Course of study
MATEMATICA
Supervisors
Supervisor Bigi, Giancarlo
Keywords
  • robust optimization
  • ReLU networks
  • neural networks
  • deep learning
  • LeakyReLU networks
  • approximation theory
  • adversarial examples
Defense session start date
15/02/2019
Availability
Full
Abstract
In the last three decades, the scientific community has intensified research on neural networks, reevaluating their power in many theoretical and practical areas. First defined by Frank Rosenblatt in 1958 through the notion of the perceptron (an entity composed of an input, an output, and weights that can be adjusted with an error-correction learning rule), they were an attempt to model the human brain which, according to the neuroscience theories of the time, was thought of as a huge net of links between atomic entities called neurons. This first wave of enthusiasm collapsed when Marvin Minsky and Seymour Papert showed that the perceptron cannot implement the XOR function.
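To make the Minsky-Papert observation concrete, here is the standard short argument (notation mine) that no perceptron, i.e. no map of the form $x \mapsto \mathbf{1}[w_1 x_1 + w_2 x_2 + b > 0]$, can compute XOR:

\[
\mathrm{XOR}(1,0)=1 \Rightarrow w_1+b>0,\qquad
\mathrm{XOR}(0,1)=1 \Rightarrow w_2+b>0,\qquad
\mathrm{XOR}(0,0)=0 \Rightarrow b\le 0,\qquad
\mathrm{XOR}(1,1)=0 \Rightarrow w_1+w_2+b\le 0.
\]

Adding the first two inequalities gives $w_1+w_2+2b>0$, hence $w_1+w_2+b>-b\ge 0$, contradicting the last one.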
Between 1985 and 1995, Yann LeCun used a modified version of the perceptron, introduced in 1972 under the name of multilayer perceptron by the PhD student Paul Werbos, to solve the classification problem of handwritten digits. In the same years the mathematical community made a great discovery: neural networks are able to approximate continuous functions. Hornik showed that NNs are universal approximators, in the sense that they form a dense subset of the continuous functions on compact domains. Moreover, he extended the density result to every probability measure space. Some years later, in 1992, Leshno made another important step in approximation theory, characterizing all the shallow networks (networks with a single hidden layer) that are able to approximate any continuous function.
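In compact form (notation mine), a shallow network with activation function $\sigma$ computes

\[
N(x)\;=\;\sum_{i=1}^{n} c_i\,\sigma\!\left(w_i^{\top}x+b_i\right),
\qquad w_i\in\mathbb{R}^d,\; b_i,c_i\in\mathbb{R},
\]

and Leshno's characterization states that, for reasonable (locally bounded, piecewise continuous) activations, the family of such networks is dense in $C(K)$ for every compact $K\subset\mathbb{R}^d$ exactly when $\sigma$ is not a polynomial.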
Some years later, the computer science community applied neural networks to a huge variety of problems: speech recognition, image classification, email spam recognition, and so on. It was discovered that deep neural networks (with more than one hidden layer) are more powerful than shallow ones. This result is supported by a recent paper by Yarotsky, who in 2017 gave error bounds for these networks, showing that they are able to approximate all almost-everywhere differentiable functions more efficiently (using fewer computation units and hence fewer parameters). Other mathematicians approached the problem from a geometrical point of view: in 2014 the Italian mathematicians Monica Bianchini and Franco Scarselli described the complexity of networks using topological instruments such as Betti numbers.
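As a reference point, the bound from Yarotsky's 2017 paper is roughly of the following form (paraphrased from memory, not a quotation): every function in the unit ball of the Sobolev space $W^{n,\infty}([0,1]^d)$ can be approximated with uniform error $\varepsilon$ by a ReLU network with

\[
O\!\left(\log\tfrac{1}{\varepsilon}\right) \text{ layers}
\quad\text{and}\quad
O\!\left(\varepsilon^{-d/n}\log\tfrac{1}{\varepsilon}\right) \text{ weights and computation units},
\]

with constants depending only on $d$ and $n$.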
In 2013-2014, Szegedy and Goodfellow, working on deep networks applied to image classification, made an intriguing discovery: classifiers are vulnerable to adversarial examples. These are maliciously crafted images that, with high probability, are wrongly classified by the network, even though they look absolutely ordinary to the human eye. This discovery opened a new line of research, which can be divided into two categories: the first looks for a way to test whether a given network is vulnerable to these attacks; the second looks for a way to build robust ``uncrackable'' networks. Our work focuses on these problems, following Wong's paper, which provides a test (incomplete, i.e. affected by false-positive results) to verify the robustness of a given network.
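As a concrete illustration (not taken from the thesis; the classifier `model`, the input `x` with pixels in [0, 1] and the use of PyTorch are assumptions), the fast gradient sign method of Goodfellow et al. produces such an image with a single gradient step:

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, label, eps):
        # Perturb x in the direction that increases the classification loss,
        # then clip back to the valid pixel range [0, 1].
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Even a small eps is typically enough to flip the predicted class of an undefended network while leaving the image visually unchanged.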

In the first chapter we present the basic concepts of neural networks. In the first section we define the simplest kind of network: the sequential fully connected network. These networks are characterized by the presence of layers, each of which can be thought of as the composition of an affine function and an activation function. In the second section we define a more complex structure, the convolutional layer, which makes it possible to build networks that take tensors as input instead of vectors without increasing the number of computation units. In the third section, by defining slightly more general networks which do not satisfy the sequentiality property (they admit connections between non-adjacent layers), we provide a result showing that, under certain hypotheses on the activation function, a network with skip connections can be rewritten as a fully connected network. In the fourth section, we present Leshno's density theorem, which characterizes all the activation functions inducing networks that are dense in the class of continuous functions. In the fifth section we present Yarotsky's theorems, which give conditions on the complexity of the network (defined by the number of layers, connections, and neurons) to show that deep networks can approximate differentiable functions with more accuracy than shallow networks. Finally, in the last section, we present the methods used to train a network as a classifier given a set of training examples.
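As a minimal sketch of the object defined in the first section (the function names and the choice of ReLU are mine, purely illustrative), a sequential fully connected network is an alternation of affine maps and activation functions:

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def fully_connected(x, weights, biases):
        # Each hidden layer composes an affine map with the activation;
        # the last layer is left affine and returns the class scores.
        for W, b in zip(weights[:-1], biases[:-1]):
            x = relu(W @ x + b)
        return weights[-1] @ x + biases[-1]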

In the second chapter we treat the problem of robustness, which is the heart of the work. In the first section we introduce adversarial examples, showing how to generate them using the classical database of handwritten digits, MNIST. In the second section we introduce the problem of robust algorithms. In the third section we describe how to approach the problem of testing robustness by building a set called the adversarial polytope. We first show that the exact problem can be solved by a mixed-integer program, which is NP-hard (and therefore not feasible in practice), and then we compute a relaxed version of the polytope that is useful in the next section. In the fourth section we present the method that Wong uses in his article to build a test, computing the convex outer bound via the dual problem and obtaining a dual network. In the last section we show that the test itself can be used to produce a robust training algorithm by inducing a suitable loss function.
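For orientation (notation mine, following the standard presentation of this relaxation), the non-convexity comes from the exact ReLU constraint $z=\max(\hat z,0)$; on a pre-activation interval $\ell\le\hat z\le u$ with $\ell<0<u$ it is replaced by its convex envelope

\[
z \ge 0,\qquad z \ge \hat z,\qquad z \le \frac{u\,(\hat z-\ell)}{u-\ell},
\]

which turns the adversarial polytope into a convex outer approximation over which the verification objective becomes a linear program.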

In the third chapter we show that the test via the dual network, and consequently the robust training algorithm, used for ReLU networks can be applied, with some modifications, to deep networks with a similar activation function called LeakyReLU, which depends on a parameter called the slope. This kind of activation function does not produce better classification accuracy if the slope is fixed; better results can be obtained if this parameter is adaptively learned.
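For reference (a one-line illustration; the slope value shown is arbitrary), LeakyReLU keeps positive inputs unchanged and scales negative inputs by the slope:

    import numpy as np

    def leaky_relu(z, slope=0.1):
        # Identity on the positive part, a linear ramp with the given slope on the negative part.
        return np.where(z > 0.0, z, slope * z)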

In the last chapter we provide some experimental results. We focus on testing robustness in two cases: in the first, we consider a small network that induces a binary classifier of a finite random set of points in the two-dimensional plane; in the second, we consider a bigger network that induces a classifier of handwritten digits taken from the MNIST database.