Thesis etd-09042020-184028
Thesis type
Master's degree thesis
Author
ROSSOLINI, GIULIO
URN
etd-09042020-184028
Title
Coverage-driven Safety Monitoring of Deep Neural Networks
Department
INGEGNERIA DELL'INFORMAZIONE
Degree programme
EMBEDDED COMPUTING SYSTEMS
Supervisors
Supervisor Prof. Buttazzo, Giorgio C.
Supervisor Prof. Biondi, Alessandro
Keywords
- adversarial examples
- DL for safety-critical systems
- DNN Coverage
- runtime safe DL
- AI trustworthiness
Defense session start date
25/09/2020
Availability
Thesis not available for consultation
Abstract
In recent years, artificial intelligence (AI) has made enormous progress thanks to the evolution of deep neural networks (DNNs), which have reached human-level performance in several tasks. However, the behavior of Deep Learning (DL) methods remains unclear and unpredictable in various situations. One of the best-known threats to DNNs is adversarial examples, i.e., crafted inputs that cause a model to make a wrong prediction. To mitigate these problems, coverage techniques have been conceived for DNNs to drive certification and testing algorithms. Nevertheless, even when a high coverage value is reached, networks can still exhibit faulty behaviors during operation that were not detected during testing.
This project aims at ensuring that DNNs meet safety requirements during their operational phase by introducing a Coverage-Driven Mechanism that monitors the state of the network at inference time. The proposed tool is an extension of the Caffe framework and provides a series of versatile mechanisms to speed up the integration and deployment of novel algorithms. In this regard, three Runtime Safety Algorithms are integrated into the tool and tested. They aim to detect adversarial examples at runtime using a new interpretation of Neuron Coverage Criteria for DNNs. The experimental results show that the algorithms can detect up to 99.7% of adversarial examples on the MNIST dataset and 87.2% on CIFAR-10, using the LeNet and ConvNet networks, respectively.
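To make the coverage-driven monitoring idea concrete, below is a minimal, framework-agnostic sketch: per-neuron activation ranges are recorded on clean calibration data, and an input is flagged at inference time if too many of its activations fall outside the observed ranges. All names, thresholds, and the out-of-range criterion are illustrative assumptions; this does not reproduce the thesis' actual Runtime Safety Algorithms or its Caffe extension.

```python
import numpy as np

class CoverageMonitor:
    """Hypothetical illustration of coverage-based runtime monitoring."""

    def __init__(self, out_of_range_threshold=0.05):
        self.lo = None                      # per-neuron minimum seen during calibration
        self.hi = None                      # per-neuron maximum seen during calibration
        self.threshold = out_of_range_threshold

    def calibrate(self, activations):
        # activations: array of shape (num_samples, num_neurons) from clean data
        self.lo = activations.min(axis=0)
        self.hi = activations.max(axis=0)

    def is_suspicious(self, activation):
        # activation: array of shape (num_neurons,) for a single input
        outside = (activation < self.lo) | (activation > self.hi)
        # Flag the input if too large a fraction of neurons is outside the
        # ranges covered during calibration.
        return outside.mean() > self.threshold


# Usage with random data standing in for hidden-layer activations.
rng = np.random.default_rng(0)
monitor = CoverageMonitor(out_of_range_threshold=0.05)
monitor.calibrate(rng.normal(size=(1000, 256)))         # clean calibration set
print(monitor.is_suspicious(rng.normal(size=256)))       # in-distribution: likely False
print(monitor.is_suspicious(rng.normal(size=256) * 5))   # out-of-range: likely True
```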
File
File name | Size |
---|---|
Thesis not available for consultation. |