Thesis etd-05092013-174859
Thesis type
Doctoral thesis
Author
DI CORATO, FRANCESCO
URN
etd-05092013-174859
Title
A Unified Framework for Constrained Visual-Inertial Navigation with Guaranteed Convergence
Scientific disciplinary sector
ING-INF/04
Degree programme
AUTOMATICA, ROBOTICA E BIOINGEGNERIA
Supervisors
tutor Dr. Pollini, Lorenzo
tutor Prof. Innocenti, Mario
Keywords
- computer vision
- constrained kalman filter
- epipolar constraints
- observability
- optimal marker association
- robust pose estimation
- visual-inertial navigation
Defense session start date
21/06/2013
Availability
Full
Abstract
This thesis focuses on two challenging problems in applied computer vision: the motion estimation of a vehicle by fusing measurements from a low-accuracy Inertial Measurement Unit (IMU) and a Stereo Vision System (SVS), and the robust motion estimation of an object moving in front of a camera by means of probabilistic techniques.
In the first problem, a vehicle assumed to be moving in an unstructured environment is considered. The vehicle is equipped with a stereo vision system and an inertial measurement unit. For the purposes of this work, an unstructured environment means that no prior knowledge is available about the scene being observed or about the motion. For sensor fusion, the work relies on epipolar constraints as output maps in a loose coupling of the measurements provided by the two sensor suites. This means that the state vector does not contain any information about the environment or the observed keypoints, and its dimension is kept constant throughout the estimation task. An observability analysis is proposed in order to establish the asymptotic convergence properties of the parameter estimates and the motion requirements for full observability of the system. It is shown that existing visual-inertial navigation techniques that rely on (feature-based) visual constraints can be unified under these convergence properties. Simulation and experimental results that confirm the theoretical conclusions are summarized.
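For reference, a standard form of the epipolar constraint used as an output map in this class of approaches (generic notation, not necessarily the thesis's own) relates two calibrated views through their relative rotation $R$ and translation $\mathbf{t}$: corresponding normalized image points $\mathbf{p}_1$ and $\mathbf{p}_2$ satisfy

$$\mathbf{p}_2^{\top}\,[\mathbf{t}]_{\times}\,R\,\mathbf{p}_1 = 0,$$

where $[\mathbf{t}]_{\times}$ denotes the skew-symmetric matrix of $\mathbf{t}$. The constraint involves only the relative pose between the views, which is why no scene structure needs to be added to the filter state.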
In the second problem, the motion estimation algorithm takes advantage of the knowledge of the geometry of the tracked object. Similar problems are encountered, for example, in autonomous formation flight and aerial refueling, relative localization with respect to known objects and/or patterns, and so on. The problem is more challenging than in the classical literature because the system is assumed not to know a priori the association between measurements and the projections of the visible parts of the object, and the problem (usually solved via algebraic techniques or iterative optimization) is reformulated in a stochastic nonlinear filtering framework. The system is designed to be robust to outlier contamination in the data and to object occlusions. The approach is demonstrated on hand palm pose estimation and motion tracking during reach-and-grasp operations, and the related results are presented.
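As an illustration of the marker-association subproblem only (a minimal sketch under assumed names, thresholds, and data, not the stochastic filtering algorithm developed in the thesis), a globally optimal assignment between predicted marker projections and detected image points, with a simple distance gate for outliers and occlusions, could look like this:

```python
# Minimal illustration (not the thesis's method): optimally assign detected
# image points to predicted marker projections and gate out bad matches.
# Function name, gate value, and data are assumptions for the example.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_markers(predicted_px, detected_px, gate_px=15.0):
    """predicted_px: (M, 2) predicted projections of the known markers.
    detected_px:  (N, 2) detected image points (may contain outliers).
    Returns (marker_index, detection_index) pairs that pass the gate."""
    # Pairwise reprojection distances form an M x N cost matrix.
    cost = np.linalg.norm(predicted_px[:, None, :] - detected_px[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # globally optimal assignment
    # Pairs with a large residual are treated as occlusions or outliers.
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] < gate_px]

# Example: three markers, one of them occluded, plus one spurious detection.
pred = np.array([[100.0, 120.0], [200.0, 80.0], [310.0, 150.0]])
det = np.array([[102.0, 118.0], [305.0, 154.0], [400.0, 400.0]])
print(associate_markers(pred, det))  # -> [(0, 0), (2, 1)]
```

In the probabilistic setting described above, such a hard assignment would be replaced or weighted by the filter's own likelihoods, but the sketch shows why an unknown measurement-to-marker association makes the estimation problem harder than the classical, known-correspondence case.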
File
File name | Size |
---|---|
Di_Corat...hesis.pdf | 2.55 MB |