Thesis etd-01232025-215943
Thesis type
Master's thesis
Author
NOCENTINI, FRANCESCA
URN
etd-01232025-215943
Title
Deep Reinforcement Learning based Control of an Omniwheeled Mobile Manipulator for Pick-and-Place Operations
Department
INGEGNERIA DELL'INFORMAZIONE
Degree programme
INGEGNERIA ROBOTICA E DELL'AUTOMAZIONE
Supervisors
Supervisor: Garabini, Manolo
Keywords
- Deep Reinforcement Learning
- domain randomization
- Mobile Manipulation
- pick and place
Defence date
18/02/2025
Availability
Not available for consultation
Release date
18/02/2095
Abstract
Mobile manipulation encompasses a wide range of applications in robotics. It is typically challenging because it requires coordinating a mobile base and a manipulator. In real-world environments, mobile manipulation is a long-horizon task that combines navigation and control in a high-dimensional action space while respecting the constraints of the task and the environment. Most existing high-redundancy mobile manipulation systems use task-specific control algorithms, which are not adequate for unstructured and dynamic environments.
Recently, learning-based methods such as Deep Reinforcement Learning (DRL) have shown promising results in robotics.
The goal of DRL in robotics is to enable the robot (the agent) to carry out tasks and to handle changes in the environment without being reprogrammed.
This work develops a DRL-based control method for pick-and-place tasks, tested on a manipulator mounted on an omniwheeled base. Specifically, the robot must move a generic object placed on a table from an initial position to a randomly generated target point within a given time limit. The goal is to make the RL policy, and thus the control, robust to the target position and to environment noise. No specific pick-and-place motion is enforced: through trial and error, the robot learns to throw the cube when the goal is far away in depth on the table, to use the base and gripper when the goal is far away along the table's length, and simply to push the object when the goal is close to the object's initial position. The thesis presents the results achieved, emphasizing how policy performance depends on reward shaping and domain randomization choices.
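The abstract mentions three ingredients of the approach: a randomly generated target on the table, robustness to environment noise, and reward shaping. A minimal Python sketch of how these pieces typically fit together in a DRL training loop is shown below; all names, ranges, and noise levels here are illustrative assumptions and are not taken from the thesis itself.

```python
import random

# Hypothetical domain-randomization sketch: at each episode reset, the
# goal position on the table is re-sampled and observations are
# perturbed with noise, so the learned policy cannot overfit to a
# single target or to noise-free sensing.  Ranges are assumptions.

TABLE_X = (0.2, 1.0)   # assumed reachable depth range on the table [m]
TABLE_Y = (-0.5, 0.5)  # assumed lateral range on the table [m]

def sample_target(rng):
    """Draw a random goal position (x, y) on the table surface."""
    return (rng.uniform(*TABLE_X), rng.uniform(*TABLE_Y))

def randomize_observation(obs, rng, noise_std=0.01):
    """Add zero-mean Gaussian noise to each observed coordinate."""
    return tuple(o + rng.gauss(0.0, noise_std) for o in obs)

def shaped_reward(obj_pos, target, reached_bonus=1.0, tol=0.05):
    """Dense shaped reward: negative distance to goal, plus a
    sparse bonus when the object is within tolerance of the goal."""
    dist = ((obj_pos[0] - target[0]) ** 2
            + (obj_pos[1] - target[1]) ** 2) ** 0.5
    return -dist + (reached_bonus if dist < tol else 0.0)

rng = random.Random(0)
target = sample_target(rng)           # new goal for this episode
noisy_obs = randomize_observation(target, rng)  # what the policy sees
print(shaped_reward(target, target))  # object exactly at goal: full bonus
```

In a full training setup, `sample_target` would be called in the environment's reset, `randomize_observation` on every step, and `shaped_reward` would drive the policy update; the thesis studies how changes to the reward terms and to the randomization ranges affect the resulting behaviour.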
Files
File name | Size |
---|---|
Thesis not available for consultation.