BPI-MVQA: a Bi-branch Model for Medical Visual Question Answering
Overview
Background: Visual question answering in the medical domain (VQA-Med) exhibits great potential for enhancing confidence in disease diagnosis and helping patients better understand their medical conditions. One of the challenges in VQA-Med is how to better understand and combine the semantic features of medical images (e.g., X-rays, magnetic resonance imaging (MRI)) so as to answer the corresponding questions accurately on unlabeled medical datasets.
Method: We propose a novel Bi-branched model based on Parallel networks and Image retrieval for Medical Visual Question Answering (BPI-MVQA). The first branch of BPI-MVQA is a transformer structure built on a parallel network, which achieves complementary advantages in extracting image sequence features and spatial features; multi-modal features are then implicitly fused using the multi-head self-attention mechanism. The second branch retrieves images by the similarity of features generated by a VGG16 network and uses the text descriptions of the most similar images as labels.
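Conceptually, the retrieval branch amounts to a nearest-neighbour lookup over image features: extract a feature vector for the query image, compare it against the features of reference images, and adopt the text description of the closest match as the answer label. The sketch below illustrates this idea only; the function and variable names are hypothetical, plain cosine similarity stands in for the paper's retrieval step, and fixed toy vectors stand in for real VGG16 features.

```python
import numpy as np

def retrieve_caption(query_feat, gallery_feats, gallery_texts):
    """Return the text description of the reference image whose
    feature vector (e.g., from VGG16) is most similar to the query.

    query_feat    : (d,) feature vector of the query image
    gallery_feats : (n, d) feature vectors of reference images
    gallery_texts : list of n text descriptions used as labels
    """
    # Cosine similarity between the query and every gallery feature.
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q
    # The label of the nearest neighbour is taken as the answer.
    return gallery_texts[int(np.argmax(sims))]

# Toy usage: three reference "images" with 4-dimensional features.
feats = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0]])
texts = ["chest x-ray, no finding",
         "brain mri, lesion",
         "chest x-ray, effusion"]
print(retrieve_caption(np.array([0.95, 0.05, 0.0, 0.0]), feats, texts))
```

In the full model the gallery features would come from the same pretrained VGG16 used at query time, so that query and reference images live in one embedding space.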
Result: The BPI-MVQA model achieves state-of-the-art results on three VQA-Med datasets, with main metric scores exceeding the previous best results by 0.2%, 1.4%, and 1.1%, respectively.
Conclusion: The evaluation results support the effectiveness of the BPI-MVQA model in VQA-Med. The bi-branch design helps the model answer different types of visual questions. The parallel network allows multi-angle image feature extraction, a distinctive approach that helps the model better understand the semantic information of the image and achieve greater accuracy in the multi-class classification setting of VQA-Med. In addition, image retrieval helps the model answer irregular, open-ended questions from the perspective of understanding the information provided by images. The comparison of our method with state-of-the-art methods on three datasets also shows that our method brings substantial improvement to the VQA-Med system.