
ETD

Digital archive of theses defended at the Università di Pisa

Thesis etd-03212024-235644


Thesis type
Master's degree thesis
Author
BALDI, TOMMASO
URN
etd-03212024-235644
Title
Reliable autoencoders for particle physics experiments on edge devices
Department
INGEGNERIA DELL'INFORMAZIONE
Degree programme
ARTIFICIAL INTELLIGENCE AND DATA ENGINEERING
Supervisors
Supervisor Prof. Cimino, Mario Giovanni Cosimo Antonio
Co-supervisor Prof. Donati, Simone
Co-supervisor Dr. Tran, Nhan
Keywords
  • embedded systems
  • fpga
  • loss landscape
  • machine learning
  • pruning
  • quantization
  • real-time systems
  • reliable
  • robust
  • tinyml
Exam session start date
17/04/2024
Availability
Not available for consultation
Release date
17/04/2064
Abstract
Particle physics experiments, such as the Deep Underground Neutrino Experiment (DUNE) at Fermi National Accelerator Laboratory (US) and ATLAS and CMS at the CERN Large Hadron Collider (LHC), rely on machine learning (ML) at the edge to process extreme volumes of real-time streaming data. Extreme edge computation often requires robustness to faults, e.g., to function correctly in hostile environments; the computation must therefore be designed with fault tolerance as one of the primary objectives. To guarantee such robustness, we explore the loss landscapes of neural networks (NNs), looking for quality measures that help us understand whether a model has been trained properly and, if necessary, how to improve the training. Similar work has been done by Yaoqing Yang's team, highlighting which information about the quality of a trained model can be retrieved from both local and global metrics. However, we focus our research on ML techniques used in sub-nuclear physics experiments, where the model is embedded in edge devices that are constrained in terms of computing, memory, and power. These models belong to the field of TinyML, where NNs must be optimised with specific methods such as quantisation, neural architecture search, compression, and pruning. They are therefore characterised by several specific hyperparameters, in addition to the most common ones, which need to be tuned properly to achieve the previously mentioned robustness. This thesis explores both local and global metrics to understand which are more effective at sensing the robustness of a model, with the aim of finding practical techniques to improve the reliability and generalization capability of Quantized Neural Networks (QNNs).
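
The local loss-landscape metrics mentioned above can be illustrated with a small, self-contained sketch. The following Python/PyTorch snippet is not part of the thesis; the function name, perturbation scale, and toy model are illustrative assumptions. It estimates a simple sharpness score by perturbing the network weights with Gaussian noise and measuring the average increase in loss; flatter minima (smaller increases) are commonly associated with better generalization and robustness.

# Minimal sketch (assumed, not from the thesis) of one "local" loss-landscape
# metric: sharpness estimated as the average loss increase under random
# Gaussian perturbations of the weights.
import copy
import torch
import torch.nn as nn

def sharpness(model, loss_fn, x, y, sigma=1e-2, n_samples=10):
    """Average loss increase when weights are perturbed with noise of scale sigma."""
    base_loss = loss_fn(model(x), y).item()
    increases = []
    for _ in range(n_samples):
        perturbed = copy.deepcopy(model)
        with torch.no_grad():
            for p in perturbed.parameters():
                p.add_(sigma * torch.randn_like(p))
        increases.append(loss_fn(perturbed(x), y).item() - base_loss)
    return sum(increases) / len(increases)

# Toy usage on synthetic data (illustrative only).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x, y = torch.randn(64, 8), torch.randint(0, 2, (64,))
print(f"sharpness ~ {sharpness(model, nn.CrossEntropyLoss(), x, y):.4f}")

In this sketch a lower score indicates a flatter neighbourhood of the trained weights; the same probe could, under these assumptions, be applied before and after quantisation or pruning to compare how each optimisation affects the local landscape.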