Thesis etd-09072021-164550
Thesis type
Master's degree thesis
Author
GAGLIARDI, GUIDO
URN
etd-09072021-164550
Title
A novel multimodal feature learning architecture for explainable affective computing
Department
INGEGNERIA DELL'INFORMAZIONE
Degree programme
ARTIFICIAL INTELLIGENCE AND DATA ENGINEERING
Supervisors
supervisor Cimino, Mario Giovanni Cosimo Antonio
supervisor Vaglini, Gigliola
supervisor Alfeo, Antonio Luca
Keywords
- affective computing
- AI
- artificial intelligence
- autoencoder
- deep features learning
- deep learning
- explainable artificial intelligence
- features learning
- multimodal
- XAI
Defense date
24/09/2021
Availability
Full
Abstract
Affective Computing (AC) is a biomedical research field addressing the recognition of emotional states via the analysis of physiological signals.
Most of the AC approaches proposed in recent years employ EEG signals. However, EEG monitoring is affected by noise caused by the electrical activity of muscle contractions and eye blinks. These noise sources cannot simply be filtered out, because they are themselves related to emotional states.
To address this issue, this work implements a multimodal affective computing approach that employs data from both the EEG and the ECG. In this way, the interference between the two signals can be eliminated.
The adoption of data-driven approaches in real-world biomedical applications is limited, since domain experts need to validate the reasoning behind an automatic approach while employing its predictions to support a diagnosis.
A possible way to address this problem is the adoption of explainable artificial intelligence (XAI) methods. In this work, we propose a new XAI approach: an unsupervised clustering of the latent-space representations obtained via a multimodal autoencoder is matched against the corresponding class labels to assess the quality of the information distilled by the autoencoder. Different subsets of features are then iteratively selected via a differential evolution algorithm to identify the most informative subset.
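The evaluation loop described above (cluster the latent codes, score how well clusters match the class labels, and search feature subsets with differential evolution) can be sketched as follows. This is a minimal illustration, not the thesis' actual implementation: the latent matrix `Z`, the labels `y`, the purity score, and all hyper-parameters are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k=2, iters=50):
    # Naive Lloyd's k-means on the rows of X; returns cluster assignments.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def purity(cluster_labels, class_labels):
    # Fraction of samples that belong to their cluster's majority class:
    # a rough proxy for how well the clustering matches the labels.
    total = 0
    for c in np.unique(cluster_labels):
        members = class_labels[cluster_labels == c]
        total += np.bincount(members).max()
    return total / len(class_labels)

def de_feature_selection(Z, y, pop=10, gens=20, F=0.8, CR=0.9):
    # Differential evolution over continuous vectors in [0, 1]^d,
    # thresholded at 0.5 to obtain binary feature masks.
    d = Z.shape[1]
    P = rng.random((pop, d))

    def fitness(v):
        mask = v > 0.5
        if not mask.any():
            return 0.0
        return purity(kmeans(Z[:, mask]), y)

    scores = np.array([fitness(v) for v in P])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = P[rng.choice(others, 3, replace=False)]
            # DE mutation + binomial crossover, clipped back into [0, 1].
            trial = np.where(rng.random(d) < CR, a + F * (b - c), P[i])
            trial = np.clip(trial, 0.0, 1.0)
            s = fitness(trial)
            if s >= scores[i]:
                P[i], scores[i] = trial, s
    best = P[scores.argmax()] > 0.5
    return best, scores.max()
```

A usage example on synthetic data: two Gaussian classes separated along the first four latent dimensions, padded with four pure-noise dimensions, so the search should favor masks covering the informative half.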
In this work, we use EEG and ECG signals from the well-known MAHNOB and DEAP datasets to perform emotion classification. The emotions are labelled in terms of high and low arousal, resulting in a binary classification problem.
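Turning continuous self-assessment ratings into the high/low arousal labels mentioned above is typically done with a mid-scale threshold. A minimal sketch, assuming a 1-to-9 rating scale and a threshold of 5 (both are common conventions, not details taken from the thesis):

```python
import numpy as np

def binarize_arousal(ratings, threshold=5.0):
    # Map continuous self-assessed arousal ratings to binary labels:
    # 0 = low arousal (rating <= threshold), 1 = high arousal.
    ratings = np.asarray(ratings, dtype=float)
    return (ratings > threshold).astype(int)
```

For example, ratings `[2.0, 7.5, 5.0, 6.1]` would become labels `[0, 1, 0, 1]`, with the rating exactly at the threshold assigned to the low-arousal class.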
File
File name | Size |
---|---|
tesi_consegnata.pdf | 3.00 Mb |