Thesis etd-08302022-183557
Thesis type
Master's degree thesis
Author
FAZZONE, CHIARA
URN
etd-08302022-183557
Title
Probing of Pre-Trained Language Models for Metonymy Classification: a new Dataset and Experiments
Department
FILOLOGIA, LETTERATURA E LINGUISTICA
Degree programme
INFORMATICA UMANISTICA
Supervisors
Supervisor Prof. Lenci, Alessandro
Keywords
- BERT
- metonymy
- neural language model
- NLP
- pre-trained language models
- probing task
- Transformers
Defence session start date
26/09/2022
Availability
Thesis not available for consultation
Abstract
Pre-trained language models such as BERT (Bidirectional Encoder Representations from Transformers) have achieved state-of-the-art performance in NLP. The success of such models is due to the contextualized vector representations of language, also known as embeddings, that they are able to generate and that have attracted much attention from researchers. However, pre-trained language models suffer from low interpretability, meaning that it is difficult to identify which pieces of information are encoded in their embeddings, and where and how they are encoded. The aim of this work is to probe the vector representations extracted from BERT to understand whether the model captures information related to one linguistic phenomenon in particular: metonymy.
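As a rough illustration of the kind of probing setup the abstract describes, the sketch below trains a simple linear probe on frozen BERT embeddings of a target word. The toy sentences, the literal/metonymic labels, the choice of `bert-base-uncased`, the layer, and the pooling strategy are all illustrative assumptions and do not reflect the thesis' actual dataset or experimental design.

```python
# Minimal probing sketch: extract contextual embeddings of a target word
# from a frozen BERT model and fit a linear classifier on top of them.
# All data and hyperparameters below are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical examples: (sentence, target expression, label),
# where label 1 = metonymic reading and 0 = literal reading.
examples = [
    ("The White House issued a statement.", "White House", 1),
    ("They repainted the white house on the corner.", "house", 0),
    ("Washington signed the treaty.", "Washington", 1),
    ("She moved to Washington last year.", "Washington", 0),
]

def target_embedding(sentence, target, layer=-1):
    """Mean-pool the hidden states of the subword tokens covering the target."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc, output_hidden_states=True).hidden_states[layer][0]
    # Locate the target's subword span by matching its token ids (simplified).
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0).numpy()
    raise ValueError(f"Target '{target}' not found in tokenized sentence.")

X = [target_embedding(sentence, target) for sentence, target, _ in examples]
y = [label for _, _, label in examples]

# A linear probe: if it separates the classes, the frozen embeddings encode
# information relevant to the literal/metonymic distinction.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("Training accuracy of the probe:", probe.score(X, y))
```

In a realistic probing study the probe would be evaluated on held-out data and compared against control baselines; this fragment only shows the overall pipeline of extracting frozen embeddings and fitting a lightweight classifier on top.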
Files
File name | Size |
---|---|
Thesis not available for consultation. |