ETD System

database of electronic academic theses and dissertations

 

Thesis etd-12152015-222725


Thesis type
PhD thesis
Author
CUTOLO, FABRIZIO
URN
etd-12152015-222725
Title
Wearable Stereoscopic Augmented Reality System for Medical Procedures.
Scientific disciplinary sector
ING-INF/06
Degree programme
SCIENZE CLINICHE E TRASLAZIONALI
Committee
tutor Prof. Ferrari, Vincenzo
tutor Parchi, Paolo Domenico
committee member Prof. Ferrari, Mauro
committee member Ing. Tecchia, Franco
Keywords
  • Image Guided Surgery
  • Head Mounted Display
  • Augmented Reality
  • Machine Vision
  • Surgical Navigation
Thesis defence date
06/01/2016
Availability
partial
Abstract
The idea of integrating the surgeon’s perceptive efficiency with the aid of new AR visualization modalities has been a dominant topic of academic and industrial research in the medical domain since the 1990s. From the beginning, AR technology appeared to represent a significant development in the context of image-guided surgery, because it aimed to contextually integrate surgical navigation with virtual planning.
Particularly in the realm of surgical navigation, the quality of the AR experience affects the degree of acceptance among physicians, and it depends on how well the virtual content is integrated into the real world spatially, photometrically, and temporally. In this regard, wearable systems based on head-mounted displays (HMDs) offer the most ergonomic and easily translatable solution for all those surgical procedures manually performed by the surgeon under direct vision. This is because they intrinsically provide the surgeon with an egocentric viewpoint of the augmented scene and do not limit his/her freedom of movement around the patient.
However, there have been, and still are, a few technological and human-factor reasons why such systems encounter difficulty in being routinely integrated into the surgical workflow. From a general perspective, concerns remain about the tradeoff between technological and human-factor burdens on one side, and proven benefits deriving from the adoption of this new technology on the other.
In this thesis, I motivate why a stereoscopic video see-through display is the most effective wearable solution from a technological and ergonomic standpoint, and I introduce a registration method that relies on a video-based tracking modality designed for applications in a clinical scenario, wherever rigid anatomies are involved (e.g. orthopaedic surgery, maxillofacial surgery, ENT surgery, and neurosurgery).
The proposed video tracking solution does not require the introduction of obtrusive external trackers into the operating room. It solely relies on the stereo localization of three monochromatic markers rigidly constrained to the surgical scene. Small spherical markers can be conveniently anchored to the patient’s body and/or around the surgical area without compromising the surgeon’s field of view. Further, the use of three indistinguishable markers makes the video-based tracking approach highly robust, even in the presence of uncontrollable lighting conditions. The algorithm provides sub-pixel registration accuracy thanks to a two-stage method for camera pose estimation. This accurate and robust registration solution is suitable for ergonomic wearable AR systems that comprise solely off-the-shelf components: a personal computer and a video see-through HMD (itself composed of a commercial HMD and a pair of external USB cameras).
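The geometric core of a marker-based stereo registration of this kind can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the thesis implementation: it assumes the three markers have already been detected and matched between the left and right images and are expressed as image coordinates under known projection matrices, and it shows only a closed-form first stage (DLT triangulation followed by a Kabsch rigid alignment of the known marker geometry to the triangulated points); the refinement stage of the two-stage method is omitted.

```python
import numpy as np

def triangulate(P_l, P_r, x_l, x_r):
    """Linear (DLT) triangulation of one 3D point from a calibrated stereo pair.

    P_l, P_r: 3x4 projection matrices; x_l, x_r: (u, v) image coordinates.
    """
    A = np.vstack([
        x_l[0] * P_l[2] - P_l[0],
        x_l[1] * P_l[2] - P_l[1],
        x_r[0] * P_r[2] - P_r[0],
        x_r[1] * P_r[2] - P_r[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector = homogeneous 3D point
    return X[:3] / X[3]

def rigid_pose(model, observed):
    """Kabsch alignment: rotation R and translation t such that
    observed_i ~= R @ model_i + t (least-squares over the marker set)."""
    cm, co = model.mean(axis=0), observed.mean(axis=0)
    H = (model - cm).T @ (observed - co)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```

With three non-collinear (even indistinguishable) markers, each candidate left/right correspondence can be triangulated and the resulting triangle matched against the known inter-marker distances before the alignment step; that matching logic is omitted here for brevity.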
From a human-factor standpoint, video see-through HMDs raise issues related to the user’s interaction with the augmented content and to some perceptual conflicts. With stereoscopic video see-through HMDs, the user can perceive relative depth between real and/or virtual objects, provided that consistent binocular disparity information is present in the recorded images delivered to the left and right eyes by the two displays of the visor. Unfortunately, it is almost impossible to perfectly mimic the perceptive efficiency of the human visual system when designing stereoscopic displays. This is why such systems often give rise to perceptual/visual artifacts when used to interact with the 3D world.
Among these artifacts, the most significant is diplopia, or perceptual discomfort for the user, which may arise when the fixation point, determined by the intersection of the optical axes of the stereo camera pair, leads to reduced stereo overlap, because a large portion of the scene is not represented in both images. This perceptual problem limits the working distance of traditional stereoscopic video see-through HMDs with a fixed stereo configuration.
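The underlying geometry can be made concrete for a symmetric toed-in stereo pair. The sketch below is an illustrative small-angle approximation, not taken from the thesis: it computes the distance at which the two optical axes intersect and the horizontal disparity of a midline point relative to the fixation point; points whose relative disparity falls far outside the binocular fusional range appear doubled (diplopia). All numeric values (baseline, focal length) are assumptions for illustration.

```python
import math

def fixation_distance(baseline_m, toe_in_deg):
    """Distance at which the optical axes of a symmetric toed-in stereo
    pair intersect; toe_in_deg is the inward rotation of each camera."""
    return (baseline_m / 2.0) / math.tan(math.radians(toe_in_deg))

def relative_disparity_px(baseline_m, focal_px, z_m, fixation_m):
    """Approximate horizontal disparity (pixels) of a midline point at depth
    z_m, measured relative to the fixation point (small-angle model).
    Zero at the fixation distance; grows as the point departs from it."""
    return focal_px * baseline_m * (1.0 / z_m - 1.0 / fixation_m)
```

For a fixed vergence tuned to, say, 0.5 m, a point at 1 m already accumulates tens of pixels of relative disparity with typical baselines and focal lengths, which is why a fixed configuration constrains the usable working distance.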
In this thesis, I propose a matched hardware/software solution to cope with this problem. To restore stereo overlap and reduce image disparities within the binocular fusional area, the degree of convergence of the stereo camera pair is adjusted so as to adapt to different predefined working distances. For each predefined focus/vergence configuration, the associated intrinsic and extrinsic camera parameters are determined offline through a one-time calibration routine, and the calibration data are stored for subsequent reuse. The accuracy and robustness of the two-stage video-based pose estimation algorithm introduced above allow sub-pixel registration accuracy without requiring additional work from the user (i.e. further calibrations).
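A one-time calibration of a discrete set of focus/vergence configurations naturally maps to a runtime lookup: each predefined working distance stores its offline-measured camera parameters, and the nearest configuration is selected when the vergence is switched. The sketch below only illustrates this bookkeeping; the table entries are placeholder values, not the thesis’s calibration data.

```python
import numpy as np

def _K(f_px):
    """Toy pinhole intrinsics for a 640x480 sensor (placeholder values)."""
    return np.array([[f_px, 0.0, 320.0],
                     [0.0, f_px, 240.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical one-time calibration results: for each predefined working
# distance (metres) we store intrinsics K and the left-to-right stereo
# extrinsics (R, t) measured offline for that vergence setting.
CALIBRATIONS = {
    0.3: {"K": _K(820.0), "R": np.eye(3), "t": np.array([-0.064, 0.0, 0.0])},
    0.5: {"K": _K(800.0), "R": np.eye(3), "t": np.array([-0.064, 0.0, 0.0])},
    1.0: {"K": _K(790.0), "R": np.eye(3), "t": np.array([-0.064, 0.0, 0.0])},
}

def select_calibration(working_distance_m, table=CALIBRATIONS):
    """Return (predefined distance, stored parameters) nearest to the
    requested working distance; no re-calibration happens at runtime."""
    nearest = min(table, key=lambda d: abs(d - working_distance_m))
    return nearest, table[nearest]
```

The design point is simply that switching vergence costs one dictionary lookup rather than a new calibration session.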
Additionally, how faithfully AR-based surgical navigators reproduce the perceptive efficiency of the human visual system heavily affects the surgeon’s interaction with these new visualization modalities. Unreliable AR visualization modalities can cause cognitive overload and perceptual conflicts, leading to misinterpretation and hindering clinical decision-making.
To address this issue, in this study I introduce a new visualization processing modality, named h-PnP, that allows the definition of a task-oriented and ergonomic interaction paradigm aimed at improving image guidance in maxillofacial and/or orthopaedic surgery. The interaction paradigm is well suited to guiding the accurate placement of rigid bodies in space.
Three in vitro clinical studies are presented, one in maxillofacial surgery, one in orthopaedic surgery, and one in neurosurgery, in which the reliability of the HMD for surgical navigation was tested in conjunction with task-oriented and/or ergonomic AR visualization modalities.
The positive results obtained suggest that wearable video see-through AR displays can be considered accurate, robust, intuitive, and versatile devices. Their efficacy can significantly improve the postoperative surgical outcome.
The next appropriate step for translating these research results into a product will be to proceed to testing on humans to assess, under real clinical conditions, surgical accuracy and benefits for the patient. This clinical validation will be fundamental for the engineering of the current prototype, and hence for translating the results of my research into a reliable surgical tool.