Automatic landmark correspondence detection in medical images with an application to deformable image registration

Monika Grewal*, Jan Wiersma, Henrike Westerveld, Peter A.N. Bosman*, Tanja Alderliesten

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Purpose: Deformable image registration (DIR) can benefit from additional guidance using corresponding landmarks in the images. However, these benefits are largely understudied, especially due to the lack of automatic landmark detection methods for three-dimensional (3D) medical images.

Approach: We present a deep convolutional neural network (DCNN), called DCNN-Match, that learns to predict landmark correspondences in 3D images in a self-supervised manner. We trained DCNN-Match on pairs of computed tomography (CT) scans containing simulated deformations. We explored five variants of DCNN-Match that use different loss functions and assessed their effect on the spatial density of predicted landmarks and the associated matching errors. We also tested DCNN-Match variants in combination with the open-source registration software Elastix to assess the impact of predicted landmarks in providing additional guidance to DIR.

Results: We tested our approach on lower abdominal CT scans from cervical cancer patients: 121 pairs containing simulated deformations and 11 pairs demonstrating clinical deformations. The results showed significant improvement in DIR performance when landmark correspondences predicted by DCNN-Match were used, in the case of simulated (p = 0.0) as well as clinical deformations (p = 0.030). We also observed that the spatial density of the automatic landmarks with respect to the underlying deformation affects the extent of improvement in DIR. Finally, DCNN-Match was found to generalize to magnetic resonance imaging scans without requiring retraining, indicating easy applicability to other datasets.

Conclusions: DCNN-Match learns to predict landmark correspondences in 3D medical images in a self-supervised manner, which can improve DIR performance.
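The self-supervision idea in the Approach can be illustrated with a minimal sketch: warping a CT volume with a random smooth deformation yields a training pair whose landmark correspondences are known by construction, so no manual annotations are needed. The function below is a hypothetical illustration using NumPy/SciPy (the function name, the Gaussian-smoothed random displacement field, and all parameter values are assumptions for illustration, not the paper's actual deformation model).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def simulate_pair(ct, n_landmarks=50, max_disp=5.0, seed=0):
    """Create a self-supervised training pair: warp `ct` with a random
    smooth deformation; sampled voxel positions and their displaced
    counterparts serve as ground-truth landmark correspondences.
    (Illustrative sketch; not the deformation model used in the paper.)"""
    rng = np.random.default_rng(seed)
    shape = ct.shape  # (z, y, x)

    # Random displacement field, one component per axis, smoothed so the
    # simulated deformation is spatially coherent, then scaled to max_disp.
    field = [gaussian_filter(rng.uniform(-1, 1, shape), sigma=8) for _ in range(3)]
    field = [max_disp * f / (np.abs(f).max() + 1e-8) for f in field]

    # Backward warping: sample the original CT at the displaced coordinates.
    grid = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = [g + f for g, f in zip(grid, field)]
    warped = map_coordinates(ct, coords, order=1, mode="nearest")

    # A voxel at position p in the warped image corresponds to position
    # p + displacement(p) in the original image: correspondences for free.
    idx = tuple(rng.integers(0, s, n_landmarks) for s in shape)
    pts_warped = np.stack(idx, axis=1).astype(float)
    pts_orig = np.stack([c[idx] for c in coords], axis=1)
    return warped, pts_warped, pts_orig
```

In a pipeline like the one described, such known correspondences would supervise the matching network; at registration time, predicted correspondences can be passed to elastix as fixed/moving point sets (via its `-fp`/`-mp` options together with the `CorrespondingPointsEuclideanDistanceMetric`) to guide the DIR.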

Original language: English
Article number: 014007
Journal: Journal of Medical Imaging
Volume: 10
Issue number: 1
DOIs
Publication status: Published - 23 Feb 2023

Bibliographical note

Funding Information:
This research is part of the Open Technology Programme (project number 15586), which is financed by the Dutch Research Council (NWO), Elekta, and Xomnia. The work is further co-funded by the public-private partnership allowance for top consortia for knowledge and innovation (TKIs) from the Ministry of Economic Affairs. This work was presented in part at the SPIE Medical Imaging conference, paper 1131303: Image-Guided Procedures, Robotic Interventions, and Modeling (February 15 to February 20, 2020, Houston, Texas, USA). This work was supported by Elekta (Elekta AB, Stockholm, Sweden) and Xomnia (Xomnia B.V., Amsterdam, the Netherlands). Elekta and Xomnia were not involved in the study design, data collection, analysis and interpretation, or writing of this article.

Publisher Copyright:
© The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License.
