
TransMed: Transformers Advance Multi-Modal Medical Image Classification

Overview
Specialty Radiology
Date 2021 Aug 27
PMID 34441318
Citations 62
Abstract

Over the past decade, convolutional neural networks (CNNs) have shown highly competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs excel at extracting local image features, but the locality of the convolution operation makes them poorly suited to modeling long-range relationships. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images exhibit explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompted us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based architectures require large-scale datasets to perform well, whereas medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. We therefore propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. These results are promising, and the method has great potential to be applied to a wide range of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
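The hybrid design the abstract describes, a CNN front end that extracts low-level local features per modality feeding a transformer encoder that models long-range dependencies across modalities, can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the class name, layer counts, and dimensions are all assumptions for demonstration.

```python
import torch
import torch.nn as nn


class HybridSketch(nn.Module):
    """Illustrative CNN + transformer classifier for multi-modal images.

    NOT the published TransMed architecture; a hedged sketch of the
    general idea (shared shallow CNN -> tokens -> transformer fusion).
    """

    def __init__(self, num_classes: int = 3, dim: int = 64):
        super().__init__()
        # Shallow CNN backbone: extracts local, low-level features
        # from each single-channel modality independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),  # -> (B, dim, 4, 4)
        )
        # Transformer encoder: attends over the concatenated tokens of
        # all modalities, capturing cross-modal long-range dependencies.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, modalities):
        # modalities: list of (B, 1, H, W) tensors, one per modality.
        tokens = []
        for x in modalities:
            f = self.cnn(x)                              # (B, dim, 4, 4)
            tokens.append(f.flatten(2).transpose(1, 2))  # (B, 16, dim)
        seq = torch.cat(tokens, dim=1)   # tokens from all modalities
        enc = self.encoder(seq)          # cross-modal self-attention
        return self.head(enc.mean(dim=1))  # pooled -> class logits


model = HybridSketch(num_classes=3)
# Two hypothetical modalities (e.g. two MRI sequences), batch of 2.
logits = model([torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)])
print(tuple(logits.shape))  # (2, 3)
```

Sharing one CNN across modalities and fusing at the token level keeps the transformer's input sequence short, which is one plausible way to make attention workable on the small datasets typical of medical imaging.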

Citing Articles

Partial Attention in Global Context and Local Interaction for Addressing Noisy Labels and Weighted Redundancies on Medical Images.

Nguyen M, Phan Tran M, Nakano T, Tran T, Nguyen Q Sensors (Basel). 2025; 25(1).

PMID: 39796954 PMC: 11722591. DOI: 10.3390/s25010163.


A deep learning based smartphone application for early detection of nasopharyngeal carcinoma using endoscopic images.

Yue Y, Zeng X, Lin H, Xu J, Zhang F, Zhou K NPJ Digit Med. 2024; 7(1):384.

PMID: 39738998 PMC: 11685909. DOI: 10.1038/s41746-024-01403-2.


Enhanced brain tumor diagnosis using combined deep learning models and weight selection technique.

Gasmi K, Ben Aoun N, Alsalem K, Ltaifa I, Alrashdi I, Ben Ammar L Front Neuroinform. 2024; 18:1444650.

PMID: 39659489 PMC: 11628532. DOI: 10.3389/fninf.2024.1444650.


Large-scale long-tailed disease diagnosis on radiology images.

Zheng Q, Zhao W, Wu C, Zhang X, Dai L, Guan H Nat Commun. 2024; 15(1):10147.

PMID: 39578456 PMC: 11584732. DOI: 10.1038/s41467-024-54424-6.


A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies.

Kim S, Park H, Park S Biomed Eng Lett. 2024; 14(6):1221-1242.

PMID: 39465106 PMC: 11502678. DOI: 10.1007/s13534-024-00425-9.

