
ETD

Digital archive of theses defended at the University of Pisa

Thesis etd-06112019-214034


Thesis type
Master's thesis
Author
CROSATO, LUCA
Email address
lucacrosato7@gmail.com
URN
etd-06112019-214034
Title
Deep learning assisted closed chain inverse dynamics for biomechanical analysis during object manipulation
Department
INGEGNERIA DELL'INFORMAZIONE
Degree programme
INGEGNERIA ROBOTICA E DELL'AUTOMAZIONE
Supervisors
Supervisor: Ing. Filippeschi, Alessandro
Co-supervisor: Avizzano, Carlo Alberto
Keywords
  • Sensor Networks
  • Object Detection
  • Computer Vision for Automation
  • Simulation
Defense date
18/07/2019
Availability
Not available
Release date
18/07/2089
Abstract
To design efficient and safe systems capable of interacting with people, many researchers and engineers are interested in developing accurate human-machine interface models that can support the design of such systems. Since these models often include humans, suitable tools are needed to perform a biomechanical analysis of the human body, so that the ergonomic risk associated with a task can be classified. Such an analysis can be tailored to specific applications, ranging from human-robot interaction to ergonomic risk assessment supported by robotic technologies and methodologies.
Recent advances in wearable technologies allow for accurate and fast motion tracking. This enabling technology makes it possible to carry out a biomechanical analysis of the human body based on an inverse dynamics approach. The remaining critical problems are the determination of the external loads and the handling of kinematic loops, which add internal loads that must be considered in the analysis.
This thesis proposes a method to analyze the biomechanics of a human based on information gathered from wearable sensors, estimating both the kinematic variables and the external loads needed to solve the inverse dynamics of the human body. The musculoskeletal system is modeled as a tree in which bones are rigid bodies and the action of the muscles is summarized by wrenches applied at the spherical joints that connect the bodies.
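To make the model concrete, the sketch below shows how such a tree could be assembled with Orocos KDL, the library used later in this work. Since KDL joints have a single degree of freedom, a spherical joint is composed of three revolute joints, and each bone becomes a rigid-body segment; segment names, lengths, and inertial values here are illustrative only.

```cpp
// Sketch of the tree model: bones as rigid-body segments, a spherical joint
// composed of three 1-DoF revolute joints (KDL joints are single-DoF).
// Segment names, lengths, and inertial values are illustrative placeholders.
#include <kdl/chain.hpp>
#include <kdl/frames.hpp>
#include <kdl/rigidbodyinertia.hpp>
#include <kdl/rotationalinertia.hpp>

KDL::Chain makeUpperArm() {
  KDL::Chain chain;
  // Shoulder modeled as RotX * RotY * RotZ (massless intermediate segments).
  chain.addSegment(KDL::Segment("shoulder_x", KDL::Joint(KDL::Joint::RotX)));
  chain.addSegment(KDL::Segment("shoulder_y", KDL::Joint(KDL::Joint::RotY)));
  // The third rotation carries the upper-arm rigid body: 2.0 kg, center of
  // mass 0.15 m along the segment, diagonal rotational inertia.
  chain.addSegment(KDL::Segment(
      "upper_arm", KDL::Joint(KDL::Joint::RotZ),
      KDL::Frame(KDL::Vector(0.0, 0.0, -0.30)),  // tip frame at the elbow
      KDL::RigidBodyInertia(2.0, KDL::Vector(0.0, 0.0, -0.15),
                            KDL::RotationalInertia(0.02, 0.02, 0.002))));
  return chain;
}
```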
Based on the data coming from a network of wearable inertial sensors that captures the human motion, and on the video stream of an egocentric camera, a procedure for computing the inverse dynamics of the human body was developed. This procedure takes into account four ground support cases, i.e., single support on either foot, double support, and no support, and four typical load conditions, under the assumption that the applied external load is due to an object carried by the user.
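As a minimal sketch of this case split (the type and function names are hypothetical, not taken from the thesis), the support conditions could be handled as follows; the even split used for double support is the simplest example of the redundancy policies mentioned later in this abstract:

```cpp
// Hypothetical sketch of the case split over ground support; the wrench type
// and names are illustrative, not the thesis implementation.
#include <array>

struct Wrench { std::array<double, 3> force{}, torque{}; };
enum class Support { SingleLeft, SingleRight, Double, None };

// Returns the ground-reaction wrench on each foot (left, right).
std::array<Wrench, 2> groundReaction(Support s, const Wrench& total) {
  switch (s) {
    case Support::SingleLeft:  return {total, Wrench{}};  // all load on left
    case Support::SingleRight: return {Wrench{}, total};  // all load on right
    case Support::Double: {
      // Simplest redundancy policy: split the reaction evenly between feet.
      Wrench half;
      for (int i = 0; i < 3; ++i) {
        half.force[i]  = 0.5 * total.force[i];
        half.torque[i] = 0.5 * total.torque[i];
      }
      return {half, half};
    }
    case Support::None: return {Wrench{}, Wrench{}};       // flight phase
  }
  return {Wrench{}, Wrench{}};
}
```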
For the recognition of external loads, a computer vision approach based on deep learning techniques was used. A camera with a field of view and resolution sufficient for image classification was selected. To recognize the carried object, a set of images was recorded and labeled, and a state-of-the-art convolutional neural network (YOLO) was trained on this set. A dictionary containing the inertial properties of each object class in the set was created to provide the necessary information to the inverse dynamics algorithm.
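A minimal sketch of such a dictionary, assuming hypothetical object classes and inertial values, is a map from the label returned by the detector to the object's mass and inertia:

```cpp
// Sketch of the dictionary mapping a detected object class to its inertial
// properties; class names and numerical values are illustrative placeholders.
#include <array>
#include <string>
#include <unordered_map>

struct InertialProps {
  double mass;                    // kg
  std::array<double, 3> com;      // center of mass in the object frame, m
  std::array<double, 6> inertia;  // Ixx, Iyy, Izz, Ixy, Ixz, Iyz, kg*m^2
};

const std::unordered_map<std::string, InertialProps> kObjectDictionary = {
    {"box",    {1.5, {0.0, 0.0, 0.05}, {0.01, 0.01, 0.008, 0, 0, 0}}},
    {"bottle", {0.7, {0.0, 0.0, 0.12}, {0.004, 0.004, 0.0005, 0, 0, 0}}},
};

// Lookup after the detector returns a class label; unknown labels carry no load.
InertialProps lookup(const std::string& label) {
  auto it = kObjectDictionary.find(label);
  return it != kObjectDictionary.end() ? it->second : InertialProps{};
}
```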
In addition to the information about the carried object, the inverse dynamics algorithm takes as inputs the positions, velocities, and accelerations of the human body segments (links). The positions are obtained from the inertial sensors, whereas joint velocities and accelerations are obtained through filtering.
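The abstract does not specify the filter; a common choice is finite differencing of the joint positions followed by a first-order low-pass filter, sketched below with an illustrative smoothing factor:

```cpp
// Sketch of joint velocity/acceleration estimation by finite differences plus
// first-order low-pass smoothing; the actual filter used in the thesis is not
// specified here, and the smoothing factor alpha is illustrative.
#include <vector>

class JointDifferentiator {
 public:
  JointDifferentiator(double dt, double alpha) : dt_(dt), alpha_(alpha) {}

  // Feed a new joint-position sample; updates the filtered qdot and qddot.
  void update(const std::vector<double>& q) {
    if (!q_prev_.empty()) {
      for (size_t i = 0; i < q.size(); ++i) {
        double v_raw = (q[i] - q_prev_[i]) / dt_;            // finite difference
        double v_f   = alpha_ * v_raw + (1.0 - alpha_) * qdot_[i];
        double a_raw = (v_f - qdot_[i]) / dt_;               // before overwrite
        qddot_[i]    = alpha_ * a_raw + (1.0 - alpha_) * qddot_[i];
        qdot_[i]     = v_f;
      }
    } else {
      qdot_.assign(q.size(), 0.0);
      qddot_.assign(q.size(), 0.0);
    }
    q_prev_ = q;
  }

  const std::vector<double>& velocity() const { return qdot_; }
  const std::vector<double>& acceleration() const { return qddot_; }

 private:
  double dt_, alpha_;
  std::vector<double> q_prev_, qdot_, qddot_;
};
```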
To facilitate the integration of the proposed procedure with robot models in human-robot interaction tasks, a modular environment, namely ROS, was selected.
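A minimal sketch of what such a ROS node could look like follows; the topic names and message types are illustrative, not those used in the thesis:

```cpp
// Minimal sketch of the ROS integration: a node that subscribes to a sensor
// stream and publishes estimated wrenches. Topic names and message choices
// are illustrative placeholders.
#include <ros/ros.h>
#include <sensor_msgs/Imu.h>
#include <geometry_msgs/WrenchStamped.h>

ros::Publisher wrench_pub;

void imuCallback(const sensor_msgs::Imu::ConstPtr& msg) {
  geometry_msgs::WrenchStamped w;
  w.header = msg->header;
  // ... run the inverse dynamics pipeline here and fill in w.wrench ...
  wrench_pub.publish(w);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "human_inverse_dynamics");
  ros::NodeHandle nh;
  wrench_pub = nh.advertise<geometry_msgs::WrenchStamped>("joint_wrench", 10);
  ros::Subscriber sub = nh.subscribe("imu_data", 100, imuCallback);
  ros::spin();
  return 0;
}
```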
With a view to using this procedure in human-robot interaction tasks that may impose real-time constraints, and after a few tests with a simple robotic arm, it was noted that Gazebo's dynamics simulator was not a suitable choice for solving the inverse dynamics of the human body.
Therefore, an algorithm for the biomechanical analysis of the human body was developed. The algorithm is implemented in C++ and exploits the Orocos KDL library (available as a ROS package). Since Orocos KDL loads the mechanism definition from URDF files, while the data coming from the inertial sensors are in the BVH format, MATLAB scripts were created to convert the BVH data to URDF models.
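Under these assumptions, the inverse dynamics of one kinematic chain of the model could be solved with KDL's recursive Newton-Euler solver, as in the sketch below; the external wrench on the last segment stands in for the load of the carried object, and the chain and joint states would come from the URDF model and the sensing pipeline:

```cpp
// Sketch of inverse dynamics with Orocos KDL's recursive Newton-Euler solver.
// The chain represents one kinematic chain of the human model; q, q_dot, and
// q_dotdot come from the sensors and the differentiation filter.
#include <kdl/chain.hpp>
#include <kdl/chainidsolver_recursive_newton_euler.hpp>
#include <kdl/frames.hpp>
#include <kdl/jntarray.hpp>

KDL::JntArray solveInverseDynamics(const KDL::Chain& chain,
                                   const KDL::JntArray& q,
                                   const KDL::JntArray& q_dot,
                                   const KDL::JntArray& q_dotdot,
                                   const KDL::Wrench& load_on_tip) {
  const KDL::Vector gravity(0.0, 0.0, -9.81);
  KDL::ChainIdSolver_RNE solver(chain, gravity);

  // One external wrench per segment; here the carried object's load (looked
  // up in the object dictionary) acts on the last segment, e.g. the hand.
  KDL::Wrenches f_ext(chain.getNrOfSegments(), KDL::Wrench::Zero());
  f_ext.back() = load_on_tip;

  KDL::JntArray torques(chain.getNrOfJoints());
  solver.CartToJnt(q, q_dot, q_dotdot, f_ext, torques);
  return torques;
}
```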
Although the implemented algorithms are applied to the human body under the assumption of no contact with the external world except at the feet, the proposed method extends to more complex situations.
Indeed, it allows loads to be applied to any of the bodies composing the mechanism (not necessarily a human), while letting the user provide policies to resolve the redundancies that occur when multiple contact points with the environment are present.
For the development and validation of the method, three activities were carried out. In the first, images of the objects to be recognized were recorded and labeled to train and test the object detection algorithm. In the second, human motion was simulated by applying a known trajectory, to test the inverse dynamics algorithm. In the last, a participant was equipped with the body sensor network, and the gathered data were used as input to the proposed method.
Object detection proved robust in the laboratory experimental conditions, with correct-association rates higher than 77% for all classes and a false negative rate smaller than 6%. The human motion simulation test showed that the proposed method provides the correct wrenches in the different support conditions while taking the human's dynamics into account. Finally, the tests with the participant showed that the upper body wrenches are correctly computed, whereas the lower body wrenches suffer from a gait segmentation problem that was not addressed in this prototyping phase.