Thesis etd-09222022-095447
Thesis type
Master's thesis
Author
PICCOLI, ELIA
URN
etd-09222022-095447
Title
Introducing Unsupervised Skills in Continual Reinforcement Learning Agents
Department
INFORMATICA
Degree programme
INFORMATICA
Supervisors
Supervisor Bacciu, Davide
Supervisor Lomonaco, Vincenzo
Keywords
- continual learning
- continual reinforcement learning
- progressive networks
- reinforcement learning
- skill-based dqn
- unsupervised skills
Defense session start date
07/10/2022
Availability
Thesis not available for consultation
Abstract
In recent years, Reinforcement Learning has achieved astonishing results by exploiting huge and complex deep architectures. However, this has come at the cost of unsustainable computational effort. A characteristic shared by state-of-the-art approaches, as by most Machine Learning algorithms, is that the agent's network learns to solve the task "from scratch", that is, from a random initialization, reusing previously learned skills not at all or only to a very limited extent. To address the problem of transfer and re-use, we propose a new approach called Skilled Deep Q-Learning, which leverages pre-trained unsupervised skills as the agent's prior knowledge. In the first part of the work, we discuss the implementation of this approach, compare its performance on the Atari suite, and investigate how the agent uses these skills. In the second part, we focus on Continual Reinforcement Learning scenarios, extending the proposed approach to a setting where the Reinforcement Learning agent learns more than one game simultaneously. Finally, we present various research directions that can be explored to further develop, understand, and improve the proposed approach.
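As an illustration of the idea only (not code from the thesis), below is a minimal sketch of how a skill-based DQN could condition its Q-values on frozen, pre-trained unsupervised skill encoders. All module names, shapes, and the feature-concatenation scheme are assumptions made for this example.

```python
import torch
import torch.nn as nn


class SkilledDQN(nn.Module):
    """Hypothetical skill-conditioned Q-network (illustrative sketch)."""

    def __init__(self, obs_encoder, skill_encoders, feat_dim, skill_dim, n_actions):
        super().__init__()
        self.obs_encoder = obs_encoder
        # Pre-trained unsupervised skills act as prior knowledge: keep them frozen.
        self.skill_encoders = nn.ModuleList(skill_encoders)
        for enc in self.skill_encoders:
            for p in enc.parameters():
                p.requires_grad = False
        self.q_head = nn.Sequential(
            nn.Linear(feat_dim + skill_dim * len(skill_encoders), 512),
            nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, obs):
        obs_feat = self.obs_encoder(obs)
        with torch.no_grad():  # skill features are not updated by the RL loss
            skill_feats = [enc(obs) for enc in self.skill_encoders]
        combined = torch.cat([obs_feat] + skill_feats, dim=-1)
        return self.q_head(combined)  # one Q-value per action


# Toy usage with flat observations; a real Atari agent would use conv encoders.
obs_encoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU())
skills = [nn.Sequential(nn.Linear(8, 16), nn.ReLU()) for _ in range(3)]
net = SkilledDQN(obs_encoder, skills, feat_dim=64, skill_dim=16, n_actions=4)
q_values = net(torch.randn(2, 8))  # shape (2, 4)
```

In this sketch only the observation encoder and the Q-head are trained, so the unsupervised skills remain fixed prior knowledge that the agent learns to exploit.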
File
File name | Size |
---|---|
Thesis not available for consultation. | |