
Multi-modal Transformer Architecture for Medical Image Analysis and Automated Report Generation

Overview
Journal Sci Rep
Specialty Science
Date 2024 Aug 20
PMID 39164302
Abstract

Medical practitioners examine medical images, such as X-rays, write reports based on the findings, and provide conclusive statements. Manual interpretation of the results and report writing are time-consuming processes that can delay diagnosis. We propose an automated report generation model for medical images built on an encoder-decoder architecture. Our model uses transformer encoders, including the Vision Transformer (ViT) and its variants, the Data-efficient Image Transformer (DeiT) and the BERT pre-training of image transformers (BEiT), adapted to extract visual information from medical images. Reports are transformed into text embeddings, and the Generative Pre-trained Transformer (GPT-2) serves as the decoder to generate medical reports. A cross-attention mechanism between the vision transformer and GPT-2 enables the model to produce detailed, coherent reports grounded in the visual information extracted by the encoder. We further extend report generation with general knowledge that is independent of the input image and makes the report more comprehensive in a broad sense. We conduct experiments on the Indiana University X-ray dataset to demonstrate the effectiveness of our models. Generated reports are evaluated with word-overlap metrics such as BLEU and ROUGE-L, retrieval-augmented generation (RAG) answer correctness, and similarity metrics such as Skip-Thought cosine similarity, greedy matching, vector extrema, and RAG answer similarity. Results show that our model outperforms recurrent models in report generation, answer similarity, and word-overlap metrics.
By automating the report generation process and incorporating advanced transformer architectures and general knowledge, our approach has the potential to significantly improve the efficiency and accuracy of medical image analysis and report generation.
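The core mechanism the abstract describes, a text decoder that attends over the vision encoder's patch features via cross-attention, can be sketched in PyTorch. This is an illustrative sketch, not the authors' code: the layer widths, head count, and patch/token shapes below are hypothetical placeholders, and the real model would stack many such blocks inside a GPT-2 decoder conditioned on ViT/DeiT/BEiT outputs.

```python
import torch
import torch.nn as nn

class CrossAttentionDecoderBlock(nn.Module):
    """One decoder block: masked self-attention over report tokens,
    then cross-attention from report tokens to image patch embeddings."""
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, text, image):
        # Causal mask: each report token attends only to earlier tokens.
        T = text.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        a, _ = self.self_attn(text, text, text, attn_mask=causal)
        x = self.norm1(text + a)
        # Cross-attention: queries from the text stream,
        # keys/values from the vision encoder's patch features.
        a, _ = self.cross_attn(x, image, image)
        x = self.norm2(x + a)
        return self.norm3(x + self.ffn(x))

# Hypothetical shapes: 196 ViT patches (14x14 grid), 32 report tokens, width 768.
patches = torch.randn(2, 196, 768)   # encoder output for a batch of 2 X-rays
tokens = torch.randn(2, 32, 768)     # embedded report tokens so far
out = CrossAttentionDecoderBlock()(tokens, patches)
print(out.shape)  # torch.Size([2, 32, 768])
```

At generation time, such a block would be applied autoregressively: the decoder emits one report token at a time, and the cross-attention step lets every emitted word be conditioned on the X-ray's visual features.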

Citing Articles

Singh P, Singh S. ChestX-Transcribe: a multimodal transformer for automated radiology report generation from chest x-rays. Front Digit Health. 2025; 7:1535168. PMID: 39906063. PMC: 11790570. DOI: 10.3389/fdgth.2025.1535168.
