
ETD

Digital archive of the theses defended at the University of Pisa

Thesis etd-06122020-102943


Thesis type
Master's thesis (laurea magistrale)
Author
DE BENEDETTI, MATTEO
URN
etd-06122020-102943
Title
Parametric Performance Characterization of Visual Odometry for the Sample Fetch Rover
Department
INGEGNERIA DELL'INFORMAZIONE
Degree programme
INGEGNERIA ROBOTICA E DELL'AUTOMAZIONE
Supervisors
supervisor Prof. Innocenti, Mario
Keywords
  • rover
  • Visual Odometry
  • computer vision
  • robotics
Defense date
16/07/2020
Availability
Thesis not available for consultation
Abstract
The search for life in the universe has always driven human curiosity, and the most logical place to look for it, other than our home planet, is Mars.
Venturing onto an alien planet is no easy task: it poses great challenges and dangers to anyone who would dare explore its unforgiving environment.
This is where robotic missions prove their worth: they can achieve great scientific results at a fraction of the cost and complexity and with no risk to human life, while paving the way for future human exploration.

The thesis focuses on the Sample Fetch Rover (SFR) scenario, a relevant part of the upcoming Mars Sample Return (MSR) mission, in which the European Space Agency (ESA), in collaboration with the National Aeronautics and Space Administration (NASA), will attempt to bring samples from Mars back to Earth.
The SFR differs from ExoMars in many respects, and particular attention will be given to how these differences could affect Visual Odometry (VO), currently the main component of the ExoMars GNC architecture.
To investigate the possibility of transferring the VO to the SFR, the work proceeds in two main phases.
First, a performance baseline is established for the VO implemented on one of the rovers available in the Planetary Robotics Laboratory (PRL) at ESA's European Space Research and Technology Centre (ESTEC).
Then, the effect of the previously identified parameters on VO performance is evaluated against that baseline, identifying the most relevant factors and proposing solutions to address them.

As previously mentioned, Visual Odometry is the main component of ExoMars localization, and it is planned to be used for the Sample Fetch Rover as well.
Nevertheless, there are many differences between the two missions, the rovers' designs, and how they will operate.
The objective of the thesis is to investigate those differences, replicating them as accurately as possible in the Mars Test Bed of the PRL, characterizing their effect on VO performance, and, where possible, suggesting actions to mitigate them.

The Mars Sample Return mission architecture indeed places a number of challenging requirements on the SFR, extending beyond those first defined in the assessment study.
The start of the SFR surface mission coincides with the Martian dust storm season, which means the SFR will need to be able to maintain the same performance at a much higher optical depth.
The rover is also required to traverse up to 15 km within 150 sols, which raises the required traverse speed of the SFR: it is currently planned to be around 6.7 cm/s, in contrast with the 1 cm/s for which ExoMars was designed.
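A back-of-the-envelope check makes the requirement concrete. The 15 km / 150 sols requirement and the two speeds (6.7 cm/s for the SFR, 1 cm/s for ExoMars) come from the text above; the rest is simple arithmetic, not mission data:

```python
# Traverse budget implied by the SFR requirement: 15 km in 150 sols.
TOTAL_DISTANCE_M = 15_000
MISSION_SOLS = 150
SFR_SPEED = 0.067      # m/s, planned SFR traverse speed
EXOMARS_SPEED = 0.01   # m/s, ExoMars design speed

distance_per_sol = TOTAL_DISTANCE_M / MISSION_SOLS     # 100 m per sol
sfr_minutes = distance_per_sol / SFR_SPEED / 60        # driving time per sol
exomars_hours = distance_per_sol / EXOMARS_SPEED / 3600

print(f"{distance_per_sol:.0f} m/sol -> {sfr_minutes:.0f} min/sol at SFR speed")
print(f"vs {exomars_hours:.1f} h/sol at the ExoMars design speed")
```

At the ExoMars design speed, covering 100 m per sol would take nearly three hours of continuous driving every sol, which motivates the much higher planned speed.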

One of the main differences between the SFR and ExoMars is the considerably higher translational velocity. The higher speed alone could strongly affect the VO by amplifying estimation errors and increasing the drift rate, and it also affects a large number of factors in the VO process.
A higher speed increases the spatial distance, and reduces the image overlap, between two consecutive frames. This is likely to decrease the motion estimation accuracy, up to the point of making estimation impossible if the overlap, and with it the number of features shared by consecutive frames, becomes too small.
At the same time, the spatial relationship between two consecutive frames is also strongly influenced by the frequency at which the VO runs and by the camera's position and orientation.
A camera mounted higher or pointing directly forward yields images with much more overlap and many far-away features, while a camera aimed more towards the ground leads to smaller overlap and considerably closer features. It is hard to say a priori which of the two options is better; most likely the best choice lies somewhere in between.
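The interplay of speed, frame rate, and camera geometry can be sketched with a simplified flat-ground pinhole model. This is only an illustrative sketch: all numbers (camera height, tilt, field of view, frame rate) are hypothetical, not the actual parameters of the PRL rover or the SFR.

```python
import math

def forward_overlap(speed_mps, frame_hz, height_m, tilt_deg, vfov_deg):
    """Fraction of the along-track ground footprint shared by two consecutive
    frames, for a pinhole camera over flat ground (simplified sketch).

    tilt_deg is measured down from the horizon; the footprint spans the
    ground between the near and far edges of the vertical field of view.
    """
    near_angle = math.radians(tilt_deg + vfov_deg / 2)  # steeper edge
    far_angle = math.radians(tilt_deg - vfov_deg / 2)   # shallower edge
    if far_angle <= 0:
        return 1.0  # far edge at/above the horizon: footprint unbounded
    near = height_m / math.tan(near_angle)
    far = height_m / math.tan(far_angle)
    footprint = far - near              # along-track footprint length (m)
    step = speed_mps / frame_hz         # ground distance between frames (m)
    return max(0.0, 1.0 - step / footprint)

# Hypothetical geometry: camera 1 m high, tilted 30 deg down, 45 deg
# vertical FOV, VO running at 1 Hz.
print(forward_overlap(0.01, 1.0, 1.0, 30.0, 45.0))   # ExoMars-like speed
print(forward_overlap(0.067, 1.0, 1.0, 30.0, 45.0))  # SFR-like speed
```

Even this toy model reproduces the trend described above: raising the speed or tilting the camera further down shrinks the overlap, while a camera whose field of view reaches the horizon keeps (nominally) full overlap at the cost of mostly far-away features.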

Additionally, the rover's velocity affects the blurriness of the images. A high-speed traverse leads to blurred images with very noisy features, making it hard to detect them, to match them between images, and to localize them precisely.
These three factors would of course worsen VO performance, but they could be mitigated by acting on the camera's exposure time: a shorter exposure produces darker but sharper images, reducing the effect of motion blur.
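The speed/exposure/blur relation reduces to one line of arithmetic: blur in pixels is roughly the distance travelled during the exposure divided by the ground distance one pixel covers. The ground sample distance below is a hypothetical value chosen purely for illustration:

```python
def motion_blur_px(speed_mps, exposure_s, ground_res_m_per_px):
    """Approximate motion blur, in pixels, for a camera translating over
    terrain imaged at the given ground resolution (simplified sketch;
    ignores rotation and the variation of resolution across the image)."""
    return speed_mps * exposure_s / ground_res_m_per_px

GSD = 0.002  # m per pixel, assumed ground sample distance
print(motion_blur_px(0.01, 0.01, GSD))   # ExoMars-like: 0.05 px
print(motion_blur_px(0.067, 0.01, GSD))  # SFR-like: 0.335 px
print(motion_blur_px(0.067, 0.05, GSD))  # SFR-like, longer exposure: 1.675 px
```

Under these assumed numbers, blur stays well below a pixel at ExoMars-like speeds, while at SFR-like speeds a longer exposure pushes it past a pixel, enough to degrade feature detection and sub-pixel localization.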

It is important to consider that shortening the exposure time has consequences that cannot be ignored, particularly in the SFR scenario, where the rover will have to operate at much higher optical depths than ExoMars, since the MSR mission is likely to coincide with the Martian dust storm season.
If the ambient light and visibility are already low, the exposure time cannot be reduced too far, or the resulting images will be so dark that running the VO on them becomes almost impossible.

One more interesting aspect to investigate for the SFR scenario is the terrain. Having to traverse a significantly longer distance is bound to bring the rover onto new and different terrains very often, and the localization has to be robust to these changes.
Moreover, reference images of the area near the SFR landing site have recently become available: they show a very rough ground made of flat fractured plates, with sand all around and partially over them. This kind of terrain will be a challenge for the small rover, which will have to climb over these plates while trying to maintain the planned high speed and good localization performance.

Lastly, for any rover, the VO does not run as an independent program: it is usually embedded in a much larger and more complex Guidance, Navigation and Control (GNC) architecture that takes care of many functions, including relative and global localization, obstacle detection and avoidance, motion planning, and mapping.