Thesis etd-11282022-185010
Thesis type
PhD thesis
Author
LASRI, KARIM
URN
etd-11282022-185010
Title
Linguistic Generalization in Transformer-based Neural Language Models
Academic discipline
L-LIN/01
Degree program
LINGUISTIC DISCIPLINES AND FOREIGN LITERATURES
Supervisors and committee
Supervisor: Prof. Lenci, Alessandro
Supervisor: Prof. Poibeau, Thierry
Committee member: Prof. Baroni, Marco
Committee member: Prof. Lappin, Shalom
Committee member: Prof. Cassell, Justine
Committee member: Dr. Ettinger, Allyson
Keywords
- deep learning
- generalization
- linguistic knowledge
- natural language processing
- neural language model
Defense date
01/02/2023
Availability
Full
Abstract
Neural language models are commonly deployed to perform diverse natural language processing tasks, as they produce contextual vector representations of texts that can be used in any supervised learning setting. Transformer-based neural architectures have been widely adopted to this end. After being pre-trained with a generic language modeling objective, they achieve spectacular performance on a wide array of downstream tasks that in principle require knowledge of sentence structure. As these models are not explicitly supervised with any grammatical instruction, this suggests that linguistic knowledge emerges during pre-training.
The nature of their knowledge remains poorly understood, as these models are generally used as black boxes. This has led to a growing body of research aimed at uncovering the linguistic abilities of such models. Although this literature is abundant, the epistemic grounds of the existing methodologies are not mutually translatable, underlining the need to formulate more clearly the questions concerning how linguistic knowledge is captured.
Throughout the thesis, we bridge this epistemic gap by formulating explicitly the relations between facets of the broader question. To do so, we adopt three levels of analysis for understanding neural language models: the behavioral, algorithmic, and implementational levels. Further, we carry out a series of experiments to uncover aspects of the linguistic knowledge captured by language models.
File
| File name | Size |
|---|---|
| PhD_Manu...Final.pdf | 17.77 MB |