ETD System

Digital archive of theses discussed at the University of Pisa


Thesis etd-06202018-190038

Thesis type
Master's degree thesis
Low cost eye-in-hand robotic-arm featuring AI-based human machine interface for people with disabilities
Degree programme
Supervisor: Prof. Fanucci, Luca
Co-supervisor: Ing. Meoni, Gabriele
Keywords
  • robotic arm
  • low cost
  • machine learning
  • computer vision
  • hmi
  • assistive technologies
  • artificial intelligence
  • AI
  • YOLO
Session start date
Release date
Abstract
In recent years, modern electric power wheelchairs have been equipped with manipulators to compensate for deficits in the users' manual skills caused by accidents or disabling diseases. Such robotic arms are designed to perform simple operations, such as knocking on a door, pressing buttons on a lift panel, or turning on the light in a room.
Owing to the benefits introduced by such manipulators in terms of increased mobility and autonomy, research has focused on the realization of robotic arms able to perform more complex tasks, such as interaction with small objects.
However, for highly impaired users, exploiting the full potential of such manipulators is often hard, especially because the robotic arm is usually controlled through the same joystick as the power wheelchair. For this reason, a large number of different Human Machine Interfaces (HMIs) have been developed for users with different impairments. For example, in the case of severe upper-limb disabilities, Brain Computer Interfaces (BCIs) can be used; they are very promising because they remain usable by people who cannot move their arms at all. However, such an HMI requires numerous electrodes placed on the user's body, or dedicated helmets, making it more invasive than other interfaces. For this reason, different HMIs are recommended for less severe disabilities.
In this thesis, we propose a user-independent, low-cost robotic arm with 5 Degrees of Freedom (DOFs), equipped with a monocular camera and a proximity sensor and controlled by an augmented HMI featuring a real-time AI algorithm that improves the user experience. The proposed HMI is based on a touchscreen on which the user can visualize the camera frames and tap the desired object. Starting from the user's selection, the AI extracts image features that are then processed by a tracking algorithm. The autonomy of the robotic arm is achieved through a closed feedback loop that exploits the image features extracted from the camera frames to actuate the motors of the manipulator. The main advantage of this work lies in a Computer Vision algorithm that uses Artificial Intelligence (AI) to recognize objects and extract their coordinates in the image frames. Thanks to the AI, objects such as the buttons of a lift panel are autonomously recognized and shown to the user surrounded by a bounding box. This enables a robust HMI activated by the user's tap on the touchscreen: robustness is increased because taps are accepted only inside the shown bounding boxes, preventing the approach from starting in case of user error. For more severe impairments, a Manual raster scanning and a Time raster scanning HMI have also been implemented.
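The tap-validation idea described above can be sketched as follows. This is a minimal illustration, not the thesis code: the `Detection` structure, field names, and `select_target` function are assumptions standing in for the output of a YOLO-style detector.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Detection:
    """A bounding box reported by the object detector, in pixel coordinates."""
    label: str
    x_min: int
    y_min: int
    x_max: int
    y_max: int


def select_target(tap_x: int, tap_y: int,
                  detections: List[Detection]) -> Optional[Detection]:
    """Accept a touchscreen tap only if it falls inside a detected
    bounding box; otherwise reject it so the approach is not started."""
    for det in detections:
        if det.x_min <= tap_x <= det.x_max and det.y_min <= tap_y <= det.y_max:
            return det   # valid selection: this object becomes the target
    return None          # tap outside every box: ignore it (likely user error)
```

Only a tap inside a recognized object's box returns a target; any other tap is discarded, which is what makes the interface robust to accidental touches.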
The software of the system is modular, versatile, and easy to maintain thanks to its model-based organization. It has been implemented within the Robot Operating System (ROS) environment. ROS is an open-source, multi-platform framework that manages multiple tasks as nodes of a network that exchange messages in the form of topics or services. Each node is a single processing unit or module, usually represented by a process, in which messages are received and sent according to the previous and next modules. In this way, a future reconfiguration of the software can be carried out without changing the entire software organization. It also makes it possible to test new nodes alongside the existing ones, provided the message interfaces are maintained.
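The decoupling that ROS topics provide can be illustrated with a toy publish/subscribe bus. This is a sketch of the pattern only, not the actual ROS API: as long as nodes agree on a topic name and message format, any node can be swapped or tested in isolation without touching the rest of the network.

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List


class TopicBus:
    """Toy publish/subscribe bus illustrating the ROS topic pattern
    (illustrative sketch, not the real ROS API)."""

    def __init__(self) -> None:
        # topic name -> list of subscriber callbacks
        self._subs: DefaultDict[str, List[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        """Register a node's callback for a topic."""
        self._subs[topic].append(callback)

    def publish(self, topic: str, message) -> None:
        """Deliver a message to every subscriber of the topic."""
        for cb in self._subs[topic]:
            cb(message)


# Example: a camera node publishes a frame; the tracker node receives it.
bus = TopicBus()
received = []
bus.subscribe("/camera/frame", received.append)
bus.publish("/camera/frame", {"seq": 1})
```

The camera node never needs to know which (or how many) nodes consume its frames, which is why a reconfiguration does not require changing the overall software organization.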
The entire software, with the only exception of the AI and the tracking algorithm, runs on a Raspberry Pi 3 platform.