
ETD

Digital archive of theses discussed at the University of Pisa


Thesis etd-11282022-185010


Thesis type
PhD thesis (tesi di dottorato di ricerca)
Author
LASRI, KARIM
URN
etd-11282022-185010
Thesis title
Linguistic Generalization in Transformer-based Neural Language Models
Academic discipline
L-LIN/01
Course of study
DISCIPLINE LINGUISTICHE E LETTERATURE STRANIERE
Supervisors
Supervisor: Prof. Lenci, Alessandro
Supervisor: Prof. Poibeau, Thierry
Committee member: Prof. Baroni, Marco
Committee member: Prof. Lappin, Shalom
Committee member: Prof. Cassell, Justine
Committee member: Dr. Ettinger, Allyson
Keywords
  • deep learning
  • generalization
  • linguistic knowledge
  • natural language processing
  • neural language model
Graduation session start date
01/02/2023
Availability
Full
Summary
Neural language models are commonly deployed to perform diverse natural language processing tasks, as they produce contextual vector representations of text that can be used in any supervised learning setting. Transformer-based neural architectures have been widely adopted to this end. After being pre-trained with a generic language modeling objective, they achieve spectacular performance on a wide array of downstream tasks which, in principle, require knowledge of sentence structure. As these models receive no explicit grammatical supervision, this suggests that linguistic knowledge emerges during pre-training.

The nature of this knowledge remains poorly understood, as these models are generally used as black boxes. This has led to a growing body of research aimed at uncovering the linguistic abilities of such models. While this literature is abundant, the existing methodologies rest on epistemic grounds that are not translatable into one another, underlining the need to formulate more clearly the questions addressed when probing for linguistic knowledge.
Throughout the thesis, we bridge this epistemic gap by explicitly formulating the relations between facets of the broader question. To do so, we adopt three levels of analysis for understanding neural language models: the behavioral, the algorithmic, and the implementational. We then carry out a series of experiments to uncover aspects of the linguistic knowledge captured by language models.