Multimodal 3D Medical Image Registration Guided by Shape Encoder-decoder Networks

Overview
Publisher Springer
Date 2019 Nov 20
PMID 31741286
Citations 10
Abstract

Purpose: Nonlinear multimodal image registration, for example, the fusion of computed tomography (CT) and magnetic resonance imaging (MRI), fundamentally depends on a definition of image similarity. Previous methods that derived modality-invariant representations focused on either global statistical grayscale relations or local structural similarity, both of which are prone to local optima. In contrast to most learning-based methods, which rely on strong supervision in the form of aligned multimodal image pairs, we aim to overcome this limitation and enable further practical use cases.

Methods: We propose a new concept that exploits anatomical shape information and requires only segmentation labels for each modality individually. First, a shape-constrained encoder-decoder segmentation network without skip connections is jointly trained on labeled CT and MRI inputs. Second, an iterative energy-based minimization scheme is introduced that relies on the network's ability to generate intermediate nonlinear shape representations, which further eases multimodal alignment in the presence of large deformations.
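To make the first step concrete, here is a minimal, hypothetical PyTorch sketch of a 3D encoder-decoder segmentation network without skip connections, trained jointly on individually labeled CT and MRI volumes. All module sizes, names, and the cross-entropy loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ShapeEncoderDecoder(nn.Module):
    """3D encoder-decoder WITHOUT skip connections, so the bottleneck must
    carry a compact, modality-invariant shape code (hypothetical sketch)."""

    def __init__(self, in_ch=1, num_labels=8, width=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(2 * width, 4 * width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(4 * width, 2 * width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(2 * width, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(width, num_labels, 4, stride=2, padding=1),  # label logits
        )

    def forward(self, x):
        code = self.encoder(x)      # intermediate nonlinear shape representation
        return self.decoder(code)   # segmentation logits; no skips can bypass the code

# Joint training on individually labeled CT and MRI volumes: weak supervision,
# since no aligned multimodal pairs are needed, only per-modality label maps.
net = ShapeEncoderDecoder()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
ct, mri = torch.randn(1, 1, 64, 64, 64), torch.randn(1, 1, 64, 64, 64)  # stand-in scans
seg_ct = torch.randint(0, 8, (1, 64, 64, 64))   # stand-in CT label map
seg_mri = torch.randint(0, 8, (1, 64, 64, 64))  # stand-in MRI label map
opt.zero_grad()
(loss_fn(net(ct), seg_ct) + loss_fn(net(mri), seg_mri)).backward()
opt.step()
```

Omitting skip connections is the key design choice here: it forces all information through the bottleneck, so the intermediate representations encode shape rather than modality-specific intensities.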

Results: Our novel approach robustly and accurately aligns 3D scans from the multimodal whole-heart segmentation dataset, outperforming classical unsupervised frameworks. Since both parts of our method rely on (stochastic) gradient optimization, it can easily be integrated into deep learning frameworks and executed on GPUs.
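As an illustration of the gradient-based registration step, the following sketch iteratively minimizes an energy over a dense displacement field in shape-feature space. The MSE data term, the smoothness weight, and all function names are assumptions for illustration, not the paper's exact energy formulation.

```python
import torch
import torch.nn.functional as F

def register(feat_fixed, feat_moving, iters=100, lam=0.1, lr=0.05):
    """Iteratively minimize a dissimilarity energy over a dense displacement
    field, comparing warped moving features against fixed features.
    feat_*: (1, C, D, H, W) tensors, e.g. intermediate network shape codes."""
    n, c, d, h, w = feat_moving.shape
    # dense displacement field in normalized [-1, 1] grid coordinates
    disp = torch.zeros(n, d, h, w, 3, requires_grad=True)
    # identity sampling grid
    grid = F.affine_grid(torch.eye(3, 4).unsqueeze(0), feat_moving.shape,
                         align_corners=False)
    opt = torch.optim.Adam([disp], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        warped = F.grid_sample(feat_moving, grid + disp, align_corners=False)
        data = F.mse_loss(warped, feat_fixed)  # feature dissimilarity (assumed MSE)
        # first-order smoothness regularizer on the displacement field
        smooth = sum((disp.diff(dim=k) ** 2).mean() for k in (1, 2, 3))
        (data + lam * smooth).backward()
        opt.step()
    return disp.detach()

# Toy features standing in for the encoder's shape codes of two scans
f_fixed, f_moving = torch.randn(1, 64, 8, 8, 8), torch.randn(1, 64, 8, 8, 8)
disp = register(f_fixed, f_moving, iters=10)
```

Because the energy is differentiable end to end, the same autograd machinery that trains the network also drives the registration, which is what allows the whole pipeline to run on GPUs.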

Conclusions: We present an integrated approach for weakly supervised multimodal image registration. The promising results achieved by exploiting intermediate shape features as registration guidance encourage further research in this direction.

Citing Articles

A survey on deep learning in medical image registration: New technologies, uncertainty, evaluation metrics, and beyond.

Chen J, Liu Y, Wei S, Bian Z, Subramanian S, Carass A. Med Image Anal. 2024; 100:103385.

PMID: 39612808. PMC: 11730935. DOI: 10.1016/j.media.2024.103385.


Evaluation of Machine Learning Classification Models for False-Positive Reduction in Prostate Cancer Detection Using MRI Data.

Rippa M, Schulze R, Kenyon G, Himstedt M, Kwiatkowski M, Grobholz R. Diagnostics (Basel). 2024; 14(15).

PMID: 39125553. PMC: 11311676. DOI: 10.3390/diagnostics14151677.


Applications of AI in multi-modal imaging for cardiovascular disease.

Milosevic M, Jin Q, Singh A, Amal S. Front Radiol. 2024; 3:1294068.

PMID: 38283302. PMC: 10811170. DOI: 10.3389/fradi.2023.1294068.


Towards full-stack deep learning-empowered data processing pipeline for synchrotron tomography experiments.

Zhang Z, Li C, Wang W, Dong Z, Liu G, Dong Y. Innovation (Camb). 2023; 5(1):100539.

PMID: 38089566. PMC: 10711238. DOI: 10.1016/j.xinn.2023.100539.


Dual attention network for unsupervised medical image registration based on VoxelMorph.

Li Y, Tang H, Wang W, Zhang X, Qu H. Sci Rep. 2022; 12(1):16250.

PMID: 36171468. PMC: 9519746. DOI: 10.1038/s41598-022-20589-7.

