Thesis etd-09042020-184028
Thesis type
Master's thesis
Author
ROSSOLINI, GIULIO
URN
etd-09042020-184028
Thesis title
Coverage-driven Safety Monitoring of Deep Neural Networks
Department
INGEGNERIA DELL'INFORMAZIONE
Course of study
EMBEDDED COMPUTING SYSTEMS
Supervisors
Supervisor Prof. Buttazzo, Giorgio C.
Supervisor Prof. Biondi, Alessandro
Keywords
- DL for safety-critical systems
- AI trustworthiness
- DNN Coverage
- adversarial examples
- runtime safe DL
Graduation session start date
25/09/2020
Availability
None
Summary
In recent years, artificial intelligence (AI) has made enormous progress thanks to the evolution of deep neural networks (DNNs), which have reached human-level performance in several tasks. However, the behavior of Deep Learning (DL) methods remains unclear and unpredictable in various situations. One of the best-known threats to DNNs is adversarial examples (i.e., particular inputs that cause a model to make a false prediction). To prevent these problems, coverage techniques have been conceived for DNNs to drive certification and testing algorithms. Nevertheless, even when a high coverage value is reached, networks can still exhibit faulty behaviors during operation that were not detected during testing.
This project aims at ensuring safety requirements for DNNs during their operational phase by introducing a Coverage-Driven Mechanism that monitors the state of the network at inference time. The proposed tool is an extension of the Caffe framework, which provides a series of versatile mechanisms to speed up the integration and deployment of novel algorithms. In this regard, three Runtime Safety Algorithms are integrated into the tool and tested. They aim to detect adversarial examples at runtime using a new interpretation of Neuron Coverage Criteria for DNNs. The experimental results show the capability of the algorithms to detect up to 99.7% of adversarial examples on MNIST and 87.2% on CIFAR-10, using the LeNet and ConvNet networks, respectively.
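The summary does not spell out the detection algorithms themselves, so the snippet below is only a minimal, framework-agnostic sketch (not the thesis' actual Caffe extension) of the general idea of neuron-coverage-based runtime monitoring: a coverage profile is built offline from trusted inputs and incoming inputs are checked against it at inference time. The function names, the fixed activation threshold, and the `tolerance` deviation rule are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of neuron-coverage-based runtime monitoring.

def activation_profile(activations, threshold=0.5):
    """Binary coverage pattern: which neurons fire above the threshold.

    `activations` is a 1-D array of (normalized) outputs from one
    monitored layer for a single input.
    """
    return activations > threshold

def build_reference_coverage(trusted_activations, threshold=0.5):
    """Union of coverage patterns observed on trusted inputs (e.g., the training set)."""
    covered = np.zeros(trusted_activations.shape[1], dtype=bool)
    for act in trusted_activations:
        covered |= activation_profile(act, threshold)
    return covered

def is_suspicious(activations, reference, threshold=0.5, tolerance=0.05):
    """Flag an input that activates too many neurons never covered by trusted data."""
    pattern = activation_profile(activations, threshold)
    novel = np.logical_and(pattern, ~reference).sum()
    return novel / pattern.size > tolerance

# Example usage with random data standing in for real layer activations.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trusted = rng.random((1000, 128))   # activations of 128 neurons on 1000 trusted inputs
    reference = build_reference_coverage(trusted)
    new_input = rng.random(128)         # activations produced by an incoming input
    print("suspicious:", is_suspicious(new_input, reference))
```

A practical monitor along these lines would attach such a check to one or more internal layers of the deployed network and calibrate the activation and deviation thresholds on held-out benign data.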
File
File name | Size |
---|---|
Some files are hidden due to the review of thesis publication procedures.