ETD System

Digital archive of theses defended at the University of Pisa

 

Tesi etd-07042017-140650


Type of thesis
Master's degree thesis
Autore
ARAPI, VISAR
Email address
visararapi22@gmail.com
URN
etd-07042017-140650
Title
Unsupervised and supervised methods for automatic human hand pose recognition for robotics and human robot interaction
Department
INGEGNERIA DELL'INFORMAZIONE
Degree program
INGEGNERIA ROBOTICA E DELL'AUTOMAZIONE
Committee
Supervisor Prof. Bicchi, Antonio
Supervisor Prof. Bianchi, Matteo
Supervisor Prof. Bacciu, Davide
Supervisor Ing. Della Santina, Cosimo
Supervisor Ing. Battaglia, Edoardo
Keywords
  • Extended Kalman Filter
  • Machine Learning
  • Deep Learning
  • Human hand tracking
  • Convolutional and Recurrent neural networks
  • Optimization
Date of thesis defense
20/07/2017
Availability
Partial
Release date
20/07/2020
Abstract
Hand Pose Recognition (HPR) plays an important role in human-computer interaction (HCI) and human-robot interaction (HRI). However, since the hand has many degrees of freedom (DoF) at its joints and its poses are highly variable, hand pose estimation with high precision remains a challenging problem. In this work, a two-stage HPR system is proposed.
In the first stage, I implement a hand pose reconstruction algorithm and a non-vision-based unsupervised HPR method.
I describe the general procedure, model construction, and experimental results of tracking hand kinematics with an extended Kalman filter (EKF), based on data recorded from active surface markers. I used a 26-DoF hand model that comprises the global hand posture and the digits. The reconstructions obtained from four different subjects were used to implement an unsupervised method for recognizing hand actions during grasping and manipulation tasks, which showed a high degree of accuracy.
In the second stage, I implement a vision-based supervised HPR method. Deep Neural Networks (DNNs) are applied to automatically learn features from hand posture images. The images are frames extracted from videos of grasping and manipulation tasks; the videos are divided into intervals, and a supervisor associates a specific action with each interval. Experiments verify that the proposed system achieves a recognition accuracy of 85.12%.
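To illustrate the kind of EKF-based tracking described above, here is a minimal sketch that estimates a single joint angle from a noisy 2-D marker position. The thesis uses a 26-DoF hand model with multiple markers; this toy version reduces it to one revolute joint whose marker sits at the end of a link, so the forward-kinematics function, the link length, and all noise levels are assumptions for illustration only.

```python
import numpy as np

L = 0.05  # assumed link length (m): marker at the tip of a 1-DoF finger link

def h(theta):
    # Forward kinematics: 2-D marker position for joint angle theta
    return np.array([L * np.cos(theta), L * np.sin(theta)])

def H_jac(theta):
    # Jacobian of h with respect to theta (2x1), used to linearize the EKF update
    return np.array([[-L * np.sin(theta)], [L * np.cos(theta)]])

def ekf_step(theta, P, z, Q=1e-4, R=1e-6):
    # Predict with a constant-angle motion model (identity transition)
    theta_pred, P_pred = theta, P + Q
    # Update with the marker measurement z
    Hj = H_jac(theta_pred)                         # 2x1
    S = Hj @ Hj.T * P_pred + R * np.eye(2)         # innovation covariance
    K = (P_pred * Hj.T) @ np.linalg.inv(S)         # Kalman gain, 1x2
    innov = z - h(theta_pred)                      # measurement residual
    theta_new = theta_pred + (K @ innov).item()
    P_new = ((1.0 - K @ Hj) * P_pred).item()
    return theta_new, P_new

# Track a fixed true angle from noisy marker readings
rng = np.random.default_rng(0)
true_theta = 0.8
theta, P = 0.0, 1.0
for _ in range(50):
    z = h(true_theta) + rng.normal(0.0, 1e-3, 2)
    theta, P = ekf_step(theta, P, z)
```

The real system would stack all joint angles into one state vector and use the full hand Jacobian; the structure of the predict/update loop stays the same.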
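As a rough sketch of the kind of convolutional pipeline such a frame classifier uses, the following NumPy-only example runs one convolution, a ReLU, max pooling, and a linear softmax layer over a grayscale frame. The weights are random and untrained, and the image size, filter count, and number of action classes are assumptions for illustration; the actual network architecture and training procedure are those of the thesis, not this sketch.

```python
import numpy as np

def conv2d(img, kernels):
    # 'Valid' 2-D convolution: img (H, W), kernels (F, k, k) -> (F, H-k+1, W-k+1)
    F, k, _ = kernels.shape
    H, W = img.shape
    out = np.empty((F, H - k + 1, W - k + 1))
    for f in range(F):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(img[i:i+k, j:j+k] * kernels[f])
    return out

def max_pool(x, s=2):
    # Non-overlapping s x s max pooling on each feature map
    F, H, W = x.shape
    return x[:, :H//s*s, :W//s*s].reshape(F, H//s, s, W//s, s).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(img, kernels, Wfc, b):
    feat = max_pool(np.maximum(conv2d(img, kernels), 0.0))  # conv -> ReLU -> pool
    return softmax(Wfc @ feat.ravel() + b)                  # linear -> softmax

# Assumed shapes: a 16x16 frame, 4 filters of 3x3, 5 action classes
rng = np.random.default_rng(0)
img = rng.random((16, 16))
kernels = rng.normal(0.0, 0.1, (4, 3, 3))
feat_dim = 4 * 7 * 7  # valid conv gives 14x14; 2x2 pooling gives 7x7
Wfc = rng.normal(0.0, 0.1, (5, feat_dim))
b = np.zeros(5)
probs = classify(img, kernels, Wfc, b)  # probability over the 5 action labels
```

The softmax output is what the supervised labels (one action per video interval) would be compared against during training.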
File