Thesis etd-03272024-093923
Thesis type
Master's thesis
Author
BERTONCINI, GIOELE
URN
etd-03272024-093923
Title
Benchmarking automatic prompt learning methods in the Italian language
Department
COMPUTER SCIENCE
Degree program
COMPUTER SCIENCE
Supervisors
supervisor Prof. Passaro, Lucia C.
supervisor Prof. Bacciu, Davide
co-examiner Soldani, Jacopo
Keywords
- NLP
- P-tuning
- Prefix tuning
- prompting
Defense session start date
12/04/2024
Availability
Full
Abstract
In recent years the NLP field has witnessed two major changes: first the development of large language models, which gave rise to the pre-training/fine-tuning paradigm, and then the diffusion of prompting methods that allow using Language Models (LMs) without updating their large number of parameters. In prompting, the model parameters are left unchanged and only an instruction in natural language, called a prompt, is optimized. The main difficulty of this method is choosing the right prompt to extract knowledge from an LM. To address this issue, several algorithms have been proposed that automatically search for the best-performing prompt. Moreover, some algorithms search the embedding space of an LM directly, using "soft prompts" composed of virtual tokens that do not correspond to natural-language words.
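As a hedged illustration of this setup (not code from the thesis), the sketch below uses the Hugging Face `peft` library to attach a trainable P-tuning prompt encoder to a frozen Italian classifier; the `indigo-ai/BERTino` checkpoint name, label count, and all hyperparameters are assumptions.

```python
# Minimal P-tuning sketch with Hugging Face peft. Assumptions: the
# "indigo-ai/BERTino" Hub identifier and every hyperparameter below.
from transformers import AutoModelForSequenceClassification
from peft import PromptEncoderConfig, TaskType, get_peft_model

# Load an Italian pre-trained LM to serve as the frozen backbone.
base = AutoModelForSequenceClassification.from_pretrained(
    "indigo-ai/BERTino", num_labels=2
)

# P-tuning: a small prompt encoder produces embeddings for a handful of
# virtual tokens; these live in embedding space and do not map to words.
config = PromptEncoderConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,    # length of the soft prompt
    encoder_hidden_size=128,  # hidden size of the prompt encoder
)

# Wrap the backbone: only the prompt encoder's parameters are updated
# during training, while the LM weights stay frozen.
model = get_peft_model(base, config)
model.print_trainable_parameters()
```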
This work explores the potential of soft prompting on tasks in the Italian language. In particular, two popular algorithms, namely P-tuning and Prefix tuning, are applied to 10 different classification tasks selected from the EVALITA 2023 evaluation campaign. Experimental results using these prompting techniques in combination with two LMs pre-trained on Italian (BERTino and IT-5) show that soft prompts are beneficial also for solving tasks in non-English languages such as Italian, and that soft prompting allows training models that require little or no task-specific tuning. In particular, Prefix tuning combined with IT-5 achieves good performance without any hyperparameter optimization, even in low-data settings.
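Prefix tuning with an encoder-decoder model such as IT-5 can be sketched in the same hedged way; here the Hub identifier `gsarti/it5-base` and the prefix length are illustrative assumptions rather than details taken from the thesis.

```python
# Minimal Prefix-tuning sketch. Assumption: "gsarti/it5-base" as an
# IT-5 checkpoint on the Hugging Face Hub.
from transformers import AutoModelForSeq2SeqLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

# Load the seq2seq backbone whose weights will remain frozen.
base = AutoModelForSeq2SeqLM.from_pretrained("gsarti/it5-base")

# Prefix tuning prepends trainable key/value vectors to the attention
# of every layer instead of optimizing prompt-token embeddings.
config = PrefixTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    num_virtual_tokens=20,  # prefix length per layer
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
```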
File
File name | Size |
---|---|
Tesi_completa.pdf | 1.36 MB |