Thesis etd-03222017-132528
Thesis type
PhD thesis
Author
ANDREANI, LORENZO
URN
etd-03222017-132528
Title
Study and developing of a prototype for hip replacement procedures simulation
Scientific disciplinary sector
MED/33
Degree program
CLINICAL AND TRANSLATIONAL SCIENCES (SCIENZE CLINICHE E TRASLAZIONALI)
Supervisors
Tutor: Prof. Lisanti, Michele
Keywords
- augmented reality
- hip replacement
- procedures simulation
- prototype
Date of thesis defence
25/04/2017
Availability
Not available for consultation
Release date
25/04/2057
Abstract
The idea of augmenting the surgeon’s perceptive efficiency with new AR visualization modalities has been a dominant topic of academic and industrial research in the medical domain since the 1990s. From the beginning, AR technology appeared to represent a significant development in the context of image-guided surgery, because it aimed to contextually integrate surgical navigation with virtual planning.
Particularly in the realm of surgical navigation, the quality of the AR experience affects the degree of acceptance among physicians, and it depends on how well the virtual content is integrated into the real world spatially, photometrically and temporally. In this regard, wearable systems based on head-mounted displays (HMDs) offer the most ergonomic and easily translatable solution for all those surgical procedures performed manually by the surgeon under direct vision, because they intrinsically provide the surgeon with an egocentric viewpoint of the augmented scene and do not limit his/her freedom of movement around the patient.
However, a few technological and human-factor issues have hindered, and still hinder, the routine integration of such systems into the surgical workflow. From a general perspective, concerns remain about the tradeoff between the technological and human-factor burdens on one side and the proven benefits of adopting this new technology on the other.
This thesis motivates why a stereoscopic video see-through display is the most effective wearable solution from a technological and ergonomic standpoint, and introduces a registration method that relies on a video-based tracking modality designed for clinical applications involving rigid anatomies (e.g. orthopaedic surgery, maxillofacial surgery).
The proposed video tracking solution does not require the introduction of obtrusive external trackers into the operating room. It relies solely on the stereo localization of three monochromatic markers rigidly constrained to the surgical scene. Small spherical markers can be conveniently anchored to the patient’s body and/or around the surgical area without compromising the surgeon’s field of view. Further, the use of three indistinguishable markers makes the video-based tracking approach highly robust, even under uncontrolled lighting conditions. The algorithm provides sub-pixel registration accuracy thanks to a two-stage method for camera pose estimation. This accurate and robust registration solution is suitable for use in ergonomic wearable AR systems built entirely from off-the-shelf components: a personal computer and a video see-through HMD (a commercial HMD combined with a pair of external USB cameras).
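The abstract does not detail the two-stage pose estimation, so the following is only a minimal sketch of one plausible pipeline consistent with the description: the three marker centroids detected in the calibrated stereo pair are triangulated and then rigidly aligned to their known model positions. All function names are hypothetical, and OpenCV/NumPy are used purely for illustration.

```python
import cv2
import numpy as np

def triangulate_markers(pts_left, pts_right, P_left, P_right):
    """Triangulate the three marker centroids (detected with sub-pixel accuracy
    in the left/right images) into 3D points in the left-camera frame."""
    # pts_left, pts_right: 3x2 centroid arrays; P_left, P_right: 3x4 projection matrices
    pts4d = cv2.triangulatePoints(P_left, P_right,
                                  pts_left.T.astype(np.float64),
                                  pts_right.T.astype(np.float64))
    return (pts4d[:3] / pts4d[3]).T  # 3x3 array of 3D marker positions

def rigid_align(model_pts, cam_pts):
    """First stage: closed-form rigid registration (Kabsch) of the known marker
    model onto the triangulated points. Marker correspondences are assumed already
    resolved (e.g. by testing the 3! permutations against inter-marker distances).
    A second stage could refine R, t by minimising the stereo reprojection error."""
    mu_m, mu_c = model_pts.mean(axis=0), cam_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (cam_pts - mu_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_c - R @ mu_m
    return R, t  # pose of the tracked anatomy in the camera frame
```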
From a human-factor standpoint, video see-through HMDs raise issues related to the user’s interaction with the augmented content and to some perceptual conflicts. With stereoscopic video see-through HMDs, the user can perceive relative depths between real and/or virtual objects because the two displays of the visor deliver to the left and right eyes recorded images with consistent binocular disparity information. Unfortunately, it is almost impossible to perfectly mimic the perceptive efficiency of the human visual system when designing stereoscopic displays, which is why such systems often introduce perceptual/visual artifacts when used to interact with the 3D world.
The most significant of these artifacts is diplopia, or perceptual discomfort for the user, which may arise when the fixation point, determined by the intersection of the optical axes of the stereo camera pair, leaves only a reduced stereo overlap, so that a large portion of the scene is not represented in both images. This perceptual problem limits the working distance of traditional stereoscopic video see-through HMDs with a fixed stereo configuration.
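As a rough geometric illustration (the numerical values below are assumptions, not figures from the thesis): for a symmetric toed-in stereo pair with baseline $b$ and total convergence angle $\theta$, the fixation distance is approximately

$$ d \approx \frac{b}{2\,\tan(\theta/2)} $$

so a rig fixed at, say, $b = 60\,\mathrm{mm}$ and $\theta = 8^\circ$ converges at roughly $d \approx 43\,\mathrm{cm}$; targets much nearer or farther than $d$ fall outside the region imaged by both cameras, producing the loss of stereo overlap described above.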
This thesis presents a matched hardware/software solution to cope with this problem. To restore stereo overlap and reduce image disparities within the binocular fusional area, the degree of convergence of the stereo camera pair is adjusted so that it adapts to different, predefined working distances. For each predefined focus/vergence configuration, the associated intrinsic and extrinsic camera parameters are determined offline in a one-time calibration routine, and the calibration data are stored for subsequent reuse. The accuracy and robustness of the two-stage video-based pose estimation algorithm introduced above allow sub-pixel registration accuracy without requiring additional work from the user (i.e. further calibrations).
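A minimal sketch of how such precomputed calibrations might be stored and reloaded at run time is shown below; the file layout, field names and configuration identifiers are hypothetical, since the abstract only states that each focus/vergence setting has its own one-time offline calibration.

```python
import json
import numpy as np

def load_calibration(config_id, path="calibrations.json"):
    """Return the stored intrinsics and stereo extrinsics for one predefined
    focus/vergence configuration, calibrated once offline (hypothetical layout)."""
    with open(path) as f:
        db = json.load(f)
    cfg = db[config_id]                # e.g. "vergence_40cm"
    K_left = np.array(cfg["K_left"])   # 3x3 intrinsic matrix, left camera
    K_right = np.array(cfg["K_right"]) # 3x3 intrinsic matrix, right camera
    R = np.array(cfg["R"])             # rotation of the right camera w.r.t. the left
    t = np.array(cfg["t"])             # translation (baseline) of the right camera
    return K_left, K_right, R, t

# At run time, the chosen working distance selects the matching configuration,
# so the one-time calibration is reused with no further calibration by the user:
# K_l, K_r, R, t = load_calibration("vergence_40cm")
```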
Additionally, the artificial reproduction of the perceptive efficiency of the human visual system in AR-based surgical navigators heavily affects the surgeon’s interaction with these new visualization modalities. Unreliable AR visualization modalities can cause cognitive overload and perceptual conflicts, leading to misinterpretation and hindering clinical decision-making.
To address this issue, this study introduces a new visualization processing modality, named h-PnP, which enables the definition of a task-oriented and ergonomic interaction paradigm aimed at improving image guidance in maxillofacial and/or orthopaedic surgery. The interaction paradigm is well suited to guiding the accurate placement of rigid bodies in space.
Several in vitro clinical studies in maxillofacial and orthopaedic surgery are presented, in which the reliability of the HMD for surgical navigation was tested in conjunction with task-oriented and/or ergonomic AR visualization modalities.
The positive results obtained suggest that wearable video see-through AR displays can be considered accurate, robust, intuitive, and versatile devices whose use can significantly improve the postoperative surgical outcome.
The next step towards translating these research results into a product will be human testing, to assess surgical accuracy and patient benefit under real clinical conditions. This clinical validation will be fundamental for the engineering of the prototype.
File
File name | Size |
---|---|
The thesis is not available for consultation. |