ETD system

Electronic theses and dissertations repository


Thesis etd-04082019-175452

Thesis type
Master's degree thesis
An assessment of automatic segmentation of the knee joint based on machine learning
Degree programme
Supervisor: Controzzi, Marco
Supervisor: Rodriguez y Baena, Ferdinando
Co-examiner: Ciuti, Gastone
Keywords
  • image-based registration
  • convolutional neural networks
  • computer assisted orthopaedic surgery
  • automatic segmentation
  • image-free registration
  • machine learning
  • medical image segmentation
Defense session date
Release date
Abstract
The creation of an accurate 3D model of the bone and joint is a crucial requirement for any computer-assisted orthopaedic surgery (CAOS) system, and the basis for all surgical planning. This model can be obtained using image-based or image-free techniques. Image-based techniques rely on the segmentation of pre-operative scans. Segmentation is typically performed manually, a time-consuming process that requires technical expertise. Image-free techniques were developed to avoid this pre-operative step; they rely on intra-operative surface scanning and bone-morphing algorithms to create a patient-specific model during surgery. Recent advances in machine learning have aided the development of novel segmentation algorithms, which may eventually enable automation of the segmentation process in image-based techniques.
In this work, we assessed the accuracy of digital models of a cadaveric femur obtained using three different systems (manual segmentation, a state-of-the-art machine-learning algorithm, and a commercial image-free surgical system) by comparing them to high-resolution optical scans of the exposed femur, which served here as the gold standard.
One cadaveric knee was used for this study. Ethical approval was obtained from the Imperial College Healthcare Tissue Bank (project R13066-3A) and the cadaveric knee was sourced from an approved supplier. The intact sample was scanned using a Siemens Spectra MRI system to obtain the images needed for the image-based procedures.
Manual segmentation of the femur bone and cartilage was performed in 3D-Slicer by thresholding, followed by manual adjustment. A 3D model of the segmented volume was then generated using the same software.
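The thresholding step described above can be sketched as follows. This is a minimal illustration only: the intensity bounds are invented for the example, whereas in the actual workflow they were chosen interactively in 3D Slicer and refined by manual adjustment.

```python
import numpy as np

def threshold_segment(volume, lower, upper):
    """Binary mask of voxels whose intensity falls inside [lower, upper]."""
    return (volume >= lower) & (volume <= upper)

# Toy 2x2x2 "volume"; a real input would be a full MRI array
# loaded from DICOM or NIfTI files.
volume = np.array([[[10.0, 120.0], [200.0, 35.0]],
                   [[90.0, 150.0], [5.0, 110.0]]])
mask = threshold_segment(volume, 80.0, 160.0)  # selects 120, 90, 150, 110
```

The resulting binary mask is what a tool such as 3D Slicer then turns into a surface model.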
The automated segmentation was performed using the Convolutional Neural Network (CNN) architecture designed by Kayalibay et al. The neural network was trained to classify voxels into five categories (femur bone, femur cartilage, tibia bone, tibia cartilage, other) using 70 MRI volumes of the knee, available from the SKI10 challenge. Training required approximately 41 hours on an NVIDIA Tesla K80 Graphics Processing Unit (GPU). After training, the same neural network was used to segment an MRI scan of the specimen. The label maps obtained from the network were then converted into a 3D model using MATLAB R2018a.
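The per-voxel classification into five categories amounts to taking, for each voxel, the class with the highest network score. A minimal sketch of that post-processing step is shown below; the class ordering and the `(classes, depth, height, width)` array layout are assumptions made for illustration, not details taken from the thesis.

```python
import numpy as np

# Hypothetical class ordering; the thesis assigns five labels per voxel.
CLASSES = ["femur_bone", "femur_cartilage", "tibia_bone", "tibia_cartilage", "other"]

def scores_to_labels(scores):
    """Collapse per-voxel class scores of shape (C, D, H, W)
    into an integer label map of shape (D, H, W)."""
    return np.argmax(scores, axis=0)

# Random stand-in for the network output on a tiny 4x4x4 volume.
rng = np.random.default_rng(0)
scores = rng.standard_normal((len(CLASSES), 4, 4, 4))
labels = scores_to_labels(scores)

# A binary mask for one tissue class is the input to surface meshing.
femur_mask = labels == CLASSES.index("femur_bone")
```

A per-class binary mask like `femur_mask` is the natural input to a meshing step (e.g. marching cubes) that produces the 3D surface model.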
The knee specimen was prepared for optical surface scanning by fixing it to a custom-made metal rig with bone cement and exposing the distal end of the femur with a vertical cut, as in a Total Knee Arthroplasty (TKA).
To obtain a gold-standard measurement, 12 high-resolution scans of the visible part of the distal femur were obtained from different positions and orientations using a Polyga HDI C210 3D Scanner. The scans were then aligned and merged using the FlexScan3D software, available from the scanner manufacturer.
The specimen was then digitised using a commercial image-free surgical system. To this end, a reference array was attached to the femur using bone pins and, after standard calibration, the visible part of the femur was digitised using the system’s probe. After this, the system’s log files were downloaded to a USB drive to access the computed surface model.
The 3D models obtained with the different modalities were imported into MATLAB as point clouds and registered to the gold standard using the Iterative Closest Point (ICP) method. The root-mean-square error (RMSE) with respect to the gold standard was then computed for each modality to summarise its accuracy.
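The registration and error metric can be sketched as follows. This is an illustrative single ICP iteration (nearest-neighbour matching followed by the Kabsch/SVD rigid-transform solution) plus a nearest-neighbour RMSE, not the MATLAB pipeline actually used in the thesis; in practice the step is repeated until convergence.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, reference):
    """One ICP iteration: match each source point to its nearest reference
    point, then solve for the best rigid transform (Kabsch / SVD)."""
    _, idx = cKDTree(reference).query(source)
    matched = reference[idx]
    mu_s, mu_m = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return source @ R.T + t

def rmse(points, reference):
    """Root-mean-square nearest-neighbour distance to the reference cloud."""
    d, _ = cKDTree(reference).query(points)
    return float(np.sqrt(np.mean(d ** 2)))

# Toy check: a grid cloud shifted by 5 mm is realigned in one step,
# because every nearest-neighbour match is then the true correspondence.
xs = np.arange(3.0)
ref = np.array([[x, y, z] for x in xs for y in xs for z in xs])
src = ref + np.array([0.05, 0.0, 0.0])
aligned = icp_step(src, ref)
```

The nearest-neighbour RMSE is a convenient summary because it does not require known point correspondences between the reconstructed model and the optical-scan reference.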
Manual segmentation was confirmed here to be the most accurate method, but it requires both expertise and time. Image-free systems have therefore gained popularity as a way to reduce the burden on clinicians.
Our results show that the accuracy of a state-of-the-art machine-learning algorithm is still lower than that of the other methods, but is approaching that of image-free systems. It does, however, still require pre-operative imaging of the patient, which adds time and cost to a navigated procedure.
These results suggest that machine-learning methods could become a powerful alternative to currently available segmentation techniques. Given the attention these methods have received in recent years, and the rapid development of hardware and software tools for deep learning, their speed, robustness, and overall accuracy are likely to continue improving in the coming years.