
ETD

Digital archive of the theses defended at the Università di Pisa

Thesis etd-03242021-214055


Thesis type
Master's degree thesis (tesi di laurea magistrale)
Author
DI PALMA, ELIANA
URN
etd-03242021-214055
Title
"Love is an open door but not a table". How humans and machines 'understand' lexicalized and creative metaphors.
Department
FILOLOGIA, LETTERATURA E LINGUISTICA
Degree course
LINGUISTICA E TRADUZIONE
Supervisors
Supervisor: Lenci, Alessandro
Co-examiner: Bertuccelli Papi, Marcella
Keywords
  • metaphors
  • distributional semantics
  • language models
  • conventional metaphors
  • novel metaphors
  • semantica distribuzionale
  • metafore
  • modelli computazionali
  • plausibilità semantica
  • metafore creative
  • metafore convenzionali
Defence session start date
26/04/2021
Availability
Thesis not available for consultation
Abstract
Metaphor is a widespread linguistic and cognitive phenomenon, and many studies have been carried out to investigate how humans understand and produce metaphors. A key aspect of the phenomenon is the difference between frozen and creative metaphors: it has been shown that humans interpret conventional and novel metaphors in different ways and are sensitive to this distinction. Our study confirms that result. Furthermore, metaphors, especially creative ones, are difficult to model computationally. Recent progress has been made in metaphor identification, partly thanks to the contextualized embeddings produced by models like BERT. To test what BERT, RoBERTa and GPT2 know about metaphors, we challenge them with a new dataset of conventional and creative metaphors accompanied by various types of human judgments. We find that the models can "recognize" metaphors and show interesting abilities, such as predicting creative metaphors. At the same time, we show that the models still struggle to "interpret" metaphorical language, even though they outperform traditional static vectors. Our findings confirm previous claims about the abilities and limitations of these models. Furthermore, they show that RoBERTa outperforms BERT and GPT2 in the first experiment, and that BERT large performs reasonably well at "interpreting" metaphors in its upper-intermediate layers, as suggested by the results of the second experiment. Finally, the models show some similarities with humans, but still miss a significant part of human intuitions about the meaning of metaphors.
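The abstract describes comparing model similarity scores against human judgments of metaphors. One common way to set up such a comparison is to correlate cosine similarities between word embeddings with human plausibility ratings. The following is a minimal, self-contained sketch with toy vectors and hypothetical ratings (none of these numbers come from the thesis data; in the actual experiments the vectors would be contextualized embeddings extracted from BERT, RoBERTa or GPT2 layers):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(xs, ys):
    """Spearman rank correlation (assumes no tied values)."""
    def ranks(values):
        order = np.argsort(values)
        r = np.empty(len(values))
        r[order] = np.arange(1, len(values) + 1)
        return r
    rx, ry = ranks(np.asarray(xs)), ranks(np.asarray(ys))
    n = len(xs)
    return 1.0 - 6.0 * float(np.sum((rx - ry) ** 2)) / (n * (n ** 2 - 1))

# Toy (hypothetical) embeddings for tenor-vehicle pairs, echoing the
# title's example: "love" paired with "door", "table", "fire".
pairs = {
    "love-door":  (np.array([0.9, 0.1, 0.3]), np.array([0.8, 0.2, 0.4])),
    "love-table": (np.array([0.9, 0.1, 0.3]), np.array([0.1, 0.9, 0.2])),
    "love-fire":  (np.array([0.9, 0.1, 0.3]), np.array([0.7, 0.3, 0.5])),
}
# Hypothetical human plausibility ratings (1-7 scale) for the same pairs.
human = {"love-door": 6.1, "love-table": 1.8, "love-fire": 5.4}

names = sorted(pairs)
model_scores = [cosine(*pairs[n]) for n in names]
human_scores = [human[n] for n in names]
rho = spearman(model_scores, human_scores)
print(round(rho, 2))  # perfect rank agreement on this toy data -> 1.0
```

On this toy data the model's cosine similarities rank the pairs exactly as the hypothetical human ratings do, so the Spearman correlation is 1.0; with real embeddings and real judgments the correlation quantifies how much of the human intuitions the model captures.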