ETD

Digital archive of theses defended at the Università di Pisa

Thesis etd-01312025-165653


Thesis type
PhD thesis
Author
SAVERI, GAIA
URN
etd-01312025-165653
Title
Neuro-Symbolic Methods for Time Series Data: Continuous Representations and Learning with Signal Temporal Logic
Scientific-disciplinary sector
INF/01 - INFORMATICA (Computer Science)
Degree programme
NATIONAL PHD IN ARTIFICIAL INTELLIGENCE
Supervisors
Tutor: Prof. Bortolussi, Luca
Co-supervisor: Prof. Nenzi, Laura
Keywords
  • explainable artificial intelligence
  • neuro-symbolic learning
  • temporal logic
Defense session start date
19/02/2025
Availability
Full
Abstract
The need to integrate Artificial Intelligence (AI) with symbolic knowledge has long been recognized. In this context, Neuro-Symbolic AI (NeSy) is emerging as a paradigm for the principled integration of sub-symbolic connectionist systems and logical knowledge. However, a notable gap hinders the integration of ML algorithms and symbolic representations: the latter are discrete objects, while ML models mostly rely on gradient-based optimization in continuous spaces. Using continuous optimization to learn and exploit logical requirements is therefore a challenging problem; a possible solution is to embed formulae in a continuous space in a meaningful way, i.e. preserving their semantics. This thesis presents a range of techniques for defining, characterizing and deploying continuous embeddings of logical formulae, with a focus on Signal Temporal Logic (STL). Starting from a kernel function that measures similarity between STL formulae, we propose a constructive algorithm for computing interpretable, finite-dimensional explicit embeddings of STL formulae. We demonstrate their predictive power and provide explanations for much of the information retained by the embeddings. We then propose an algorithm for mining STL requirements from time-series data, based on the interplay between Bayesian Optimization and Information Retrieval techniques, which searches for the discriminating STL property directly in a continuous space representing its semantics. We also propose an interpretable-by-design concept-based model for time-series classification, and we develop an autoencoder architecture based on Graph Neural Networks to construct invertible embeddings of formulae.
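
To make the kernel-based construction more concrete, the following is a minimal Python sketch (not the thesis code) of the quantitative robustness semantics for a small STL fragment and of a similarity kernel obtained as the Monte-Carlo expectation of the product of robustness values over randomly sampled trajectories. The operator set, the random-walk sampling distribution, and all function names (atomic, eventually, globally, stl_kernel) are illustrative assumptions.

import numpy as np

# Robustness (quantitative) semantics of a small STL fragment over a sampled
# signal x: positive values mean satisfaction, negative mean violation, and
# the magnitude measures how robustly. Names and the sampling scheme below
# are illustrative assumptions, not the exact constructions of the thesis.

def atomic(x, threshold):
    # Robustness of the predicate x_t > threshold at each time step.
    return x - threshold

def eventually(rho, a, b):
    # F_[a,b] phi: maximum of phi's robustness over the shifted window.
    return np.array([rho[t + a:t + b + 1].max() for t in range(len(rho) - b)])

def globally(rho, a, b):
    # G_[a,b] phi: minimum of phi's robustness over the shifted window.
    return np.array([rho[t + a:t + b + 1].min() for t in range(len(rho) - b)])

def stl_kernel(phi, psi, n_traj=1000, length=100, seed=0):
    # Monte-Carlo estimate of k(phi, psi) = E_x[rho(phi, x) * rho(psi, x)],
    # i.e. an inner product of robustness values at time 0 over random
    # trajectories (here: standard Gaussian random walks, an assumption).
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_traj):
        x = np.cumsum(rng.standard_normal(length))
        vals.append(phi(x)[0] * psi(x)[0])
    return float(np.mean(vals))

# Example: similarity between F_[0,10](x > 0) and G_[0,10](x > 0).
phi = lambda x: eventually(atomic(x, 0.0), 0, 10)
psi = lambda x: globally(atomic(x, 0.0), 0, 10)
print(stl_kernel(phi, psi))

A kernel defined as an expectation of products of robustness values is positive semi-definite by construction, so standard kernel methods can be applied on top of it, and explicit finite-dimensional embeddings can in principle be derived from it via low-rank (e.g. Nyström-style) approximations.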