Thesis etd-06192025-192006
Thesis type
Master's degree thesis
Author
CAPECCHI, GIULIO
URN
etd-06192025-192006
Title
One Model to Embed Them All: Efficient Adaptation of Dense Retrievers to Multiple Domains
Department
INGEGNERIA DELL'INFORMAZIONE
Degree program
ARTIFICIAL INTELLIGENCE AND DATA ENGINEERING
Supervisors
Supervisor Tonellotto, Nicola
Supervisor Mallia, Antonio
Supervisor Pezzuti, Francesca
Keywords
- fine-tuning
- Information Retrieval
- LoRA
Defense session start date
23/07/2025
Availability
Not available for consultation
Release date
23/07/2028
Abstract
Low-Rank Adaptation (LoRA) is an efficient fine-tuning technique that reduces the number of trainable parameters while maintaining performance close to that of full fine-tuning. In this thesis, we explore its use for personalized dense retrieval. We propose a framework in which a shared base model is adapted to multiple clients or domains through separate LoRA modules, each trained on domain-specific data. This enables personalization without maintaining a separate fully fine-tuned model for each client, substantially reducing the computational resources this scenario requires. Results show that the approach achieves performance comparable to standard fine-tuning, making it suitable for large-scale production environments.
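The thesis itself is under embargo, so the sketch below is purely illustrative and not drawn from its contents. It is a minimal PyTorch rendering of the idea the abstract describes: a shared, frozen projection extended with one trainable low-rank adapter pair per client domain, switchable at inference time. All names here (`MultiDomainLoRALinear`, `set_domain`, the example domains, and the rank/scaling defaults r=8, alpha=16) are assumptions of this sketch, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn

class MultiDomainLoRALinear(nn.Module):
    """Frozen shared linear layer with one low-rank (LoRA) adapter per domain.

    Hypothetical sketch: the active domain d adds (alpha / r) * B_d(A_d(x))
    on top of the shared frozen base projection W x, so switching domains
    only swaps two small rank-r matrices instead of the whole model.
    """

    def __init__(self, base: nn.Linear, domains: list[str], r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # shared weights stay frozen
            p.requires_grad_(False)
        self.scaling = alpha / r
        self.lora_A = nn.ModuleDict()
        self.lora_B = nn.ModuleDict()
        for d in domains:
            self.lora_A[d] = nn.Linear(base.in_features, r, bias=False)
            self.lora_B[d] = nn.Linear(r, base.out_features, bias=False)
            nn.init.normal_(self.lora_A[d].weight, std=0.02)
            nn.init.zeros_(self.lora_B[d].weight)  # adapters start as a no-op
        self.active = domains[0]

    def set_domain(self, domain: str) -> None:
        self.active = domain

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.lora_B[self.active](self.lora_A[self.active](x))
        return self.base(x) + self.scaling * delta


# Hypothetical usage: one shared 768-d projection, two client domains.
layer = MultiDomainLoRALinear(nn.Linear(768, 768), domains=["legal", "medical"])
layer.set_domain("legal")                          # route a batch to one client
out = layer(torch.randn(2, 768))                   # shape: (2, 768)
```

Under these assumptions, only the `lora_A`/`lora_B` matrices of a given domain are trained, so each additional client costs on the order of 2 · 768 · r parameters per adapted layer rather than a full copy of the model, which is what makes the shared-base setup cheap to serve at scale.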
Files
The thesis is not available for consultation.