Thesis etd-06042024-161839
Thesis type
Master's thesis
Author
CARFI', GIACOMO
URN
etd-06042024-161839
Title
Adaptively combining skill embeddings for Reinforcement Learning agents
Department
INFORMATICA
Degree programme
INFORMATICA
Supervisors
Supervisor: Prof. Bacciu, Davide
Co-supervisor: Piccoli, Elia
Keywords
- foundational models
- life-long learning agents
- machine learning
- reinforcement learning
- state representation
Defence session date
12/07/2024
Availability
Full
Abstract
Reinforcement Learning (RL) aims to learn agent behavioural policies by maximizing the cumulative reward obtained through interaction with the environment. Standard RL approaches learn an end-to-end mapping from observations to actions that defines the agent's behaviour. Foundational Models, on the other hand, learn different representations of the world, which agents can use to accelerate the learning process. In this thesis, we study how to combine these representations to create an enhanced state representation. Specifically, we propose a technique called Weight Sharing Attention (WSA), which combines the embeddings of different Foundational Models, and we empirically assess its performance against alternative combination modules. We tested our approach on different Atari games, and we analyzed the issue of out-of-distribution data and how to mitigate it. We showed that, without hyperparameter fine-tuning, WSA achieves performance comparable to state-of-the-art methods. This method is effective and could allow life-long learning agents to adapt to different scenarios over time.
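To make the core idea concrete, the following is a minimal sketch of attention-based combination of per-model embeddings, in which a single scoring vector is shared across all models so the attention weights adapt to each observation. This is an illustrative simplification, not the thesis's actual WSA module: the function names, the use of a plain dot-product scorer, and the fixed embedding dimension are all assumptions for the example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def shared_attention_combine(embeddings, w):
    """Combine embeddings from several models into one state vector.

    A single shared scoring vector `w` (learnable in a real agent,
    random here) scores every embedding; the softmax of the scores
    gives per-model attention weights, and the combined state is the
    weighted sum of the embeddings.
    """
    E = np.stack(embeddings)   # shape: (n_models, d)
    scores = E @ w             # same scorer applied to each embedding
    alphas = softmax(scores)   # attention weights over the models
    return alphas @ E, alphas  # enhanced state, plus the weights

# Hypothetical setup: three frozen "Foundational Models", each
# producing an 8-dimensional embedding of the current observation.
rng = np.random.default_rng(0)
d = 8
embs = [rng.normal(size=d) for _ in range(3)]
w = rng.normal(size=d)

state, alphas = shared_attention_combine(embs, w)
```

Because `w` is shared rather than per-model, the number of parameters does not grow with the number of embedding sources, and the weights can shift per observation, which is what would let an agent lean on different representations in different situations.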
Files

File name | Size
---|---
_Master_...gents.pdf | 9.26 Mb