Mohammad Ali Jan Ghasab
Monash University
Student

Andrew Paplinski
Associate Professor
Faculty of Information Technology, Monash University

John Betts
Senior Lecturer
Faculty of Information Technology, Monash University

Hayley Reynolds
Research Fellow
Peter MacCallum Cancer Centre

Annette Haworth
Professor of Medical Physics
School of Physics, The University of Sydney

Background and Purpose:

Visualisation of the prostate is being used increasingly to improve the accuracy of biopsies and of seed placement during brachytherapy. One approach is to create a 3D model of the patient's prostate pre-operatively from a stack of MR images and to fuse this intra-operatively with real-time transrectal ultrasound (TRUS) imaging.

The current practice of manually segmenting the 2D MR images pre-operatively and manually co-registering them with TRUS images intra-operatively requires a very high level of expertise and is time-consuming.

To improve on current practice, we propose to create an augmented-reality 3D prostate model automatically from the pre-operative MR images and real-time TRUS, suitable for a clinician to use intra-operatively.

To achieve this, we improve upon the current state of the art in prostate boundary detection and 3D model creation for MR and TRUS imaging, and introduce fast optimisation models for the deformable registration of the 3D prostate models created in each modality.
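The fast optimisation models for deformable registration are described in the work itself; as an illustrative sketch only, the following shows one standard building block that deformable schemes commonly start from: a least-squares rigid alignment (the Kabsch solution) of corresponded 3D surface points from the two modalities. The function name and the assumption of known point correspondences are ours for illustration, not the authors' method.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (Kabsch) of two corresponded
    3D point sets: returns R, t minimising ||source @ R.T + t - target||."""
    # Centre both point clouds
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    S, T = source - mu_s, target - mu_t
    # SVD of the 3x3 cross-covariance gives the optimal rotation
    U, _, Vt = np.linalg.svd(T.T @ S)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = mu_t - R @ mu_s
    return R, t

# Hypothetical usage: recover a known rotation and translation exactly
rng = np.random.default_rng(0)
src = rng.standard_normal((20, 3))            # stand-in for surface points
theta = np.pi / 6
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t0 = np.array([1.0, 2.0, 3.0])
tgt = src @ R0.T + t0
R, t = rigid_align(src, tgt)
```

In practice a deformable model would refine this rigid initialisation; for noiseless corresponded points, the rigid parameters are recovered exactly.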

Methods:

Beginning with the patient’s pre-operative 2D MR images, we create a 3D voxel-based image stack, from which a 3D model describing the patient’s prostate surface and interior is created directly by fitting a 3D Active Appearance Model (AAM) [1] using a modified Inverse Compositional Model Alignment (ICMA) approach. This forms the MR reference model. TRUS images, acquired intra-operatively, are then used to update the 3D reference model in real time, enabling the detailed MR image data to be fused with real-time TRUS images in any plane. In this way we create a realistic 3D deformable prostate model that can provide full volumetric and surface information to surgeons intra-operatively.
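The inverse compositional scheme underlying ICMA owes its speed to precomputing the template's steepest-descent images and Hessian once, outside the iteration loop. As a minimal sketch of that idea (not the authors' 3D AAM implementation), the following aligns a 2D image to a template under a translation-only warp; the function name, the 2D setting, and the integer-shift warp are simplifying assumptions.

```python
import numpy as np

def inverse_compositional_translation(template, image, n_iters=50, tol=1e-4):
    """Estimate a 2D translation p aligning `image` to `template` with the
    inverse compositional scheme: gradients and the Hessian are computed
    on the template once, not re-derived at every iteration."""
    # Precompute: template gradients are the steepest-descent images
    gy, gx = np.gradient(template.astype(float))
    sd = np.stack([gx.ravel(), gy.ravel()], axis=1)   # N x 2
    H_inv = np.linalg.inv(sd.T @ sd)                  # 2 x 2 Hessian inverse

    p = np.zeros(2)
    for _ in range(n_iters):
        # Warp the image by the current estimate (integer shift for brevity)
        dx, dy = int(round(p[0])), int(round(p[1]))
        warped = np.roll(np.roll(image, -dy, axis=0), -dx, axis=1)
        error = (warped - template).ravel()
        dp = H_inv @ (sd.T @ error)
        p = p - dp        # inverse compositional update for a translation
        if np.linalg.norm(dp) < tol:
            break
    return p

# Hypothetical usage: recover a known shift of a smooth synthetic image
ys, xs = np.mgrid[0:64, 0:64]
template = np.exp(-((xs - 32.0) ** 2 + (ys - 32.0) ** 2) / 200.0)
image = np.roll(np.roll(template, 2, axis=0), 3, axis=1)  # shift (dx=3, dy=2)
p = inverse_compositional_translation(template, image)
```

A full AAM additionally optimises shape and appearance parameters, but the same precompute-once structure is what makes the fitting fast.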

Results and Conclusion:

We have addressed two necessary steps towards the augmented-reality model outlined above. First, we compared our 3D segmentation method with the best results presented in the MICCAI PROMISE12 challenge [2] and achieved better 3D segmentation accuracy with reduced computational time. Second, our results show that the ICMA approach also segments TRUS slices with a high degree of accuracy and computational efficiency.

References:

[1] S. C. Mitchell et al., “3-D active appearance models: segmentation of cardiac MR and ultrasound images,” IEEE Transactions on Medical Imaging, vol. 21, no. 9, pp. 1167–1178, Sept. 2002.

[2] G. Litjens et al., “Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge,” Medical Image Analysis, vol. 18, no. 2, pp. 359–373, 2014.
