
Artificial Intelligence-based Methods for Fusion of Electronic Health Records and Imaging Data

Overview
Journal Sci Rep
Specialty Science
Date 2022 Oct 26
PMID 36289266
Abstract

Healthcare data are inherently multimodal, including electronic health records (EHR), medical images, and multi-omics data. Combining these data sources contributes to a better understanding of human health and enables more personalized healthcare. The central question when working with multimodal data is how to fuse them, a problem of growing interest among researchers. Advances in artificial intelligence (AI) technologies, particularly machine learning (ML), enable the fusion of these different data modalities to provide multimodal insights. To this end, in this scoping review, we synthesize and analyze the literature that uses AI techniques to fuse multimodal medical data for different clinical applications. More specifically, we consider studies that fused only EHR and medical imaging data to develop AI methods for clinical applications. We present a comprehensive analysis of the fusion strategies, the diseases and clinical outcomes for which multimodal fusion was used, the ML algorithms used to perform multimodal fusion for each clinical application, and the available multimodal medical datasets. We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. We searched Embase, PubMed, Scopus, and Google Scholar to retrieve relevant studies. After pre-processing and screening, we extracted data from 34 studies that fulfilled the inclusion criteria. We found that the number of studies fusing imaging data with EHR is increasing, doubling from 2020 to 2021. In our analysis, a typical workflow was observed: feeding raw data, fusing the different data modalities by applying conventional ML or deep learning (DL) algorithms, and finally evaluating the multimodal fusion through clinical outcome predictions. Early fusion was the most commonly used technique across applications (22 of 34 studies).
We found that multimodal fusion models outperformed traditional single-modality models on the same tasks. From a clinical outcome perspective, disease diagnosis and prediction were the most common targets (reported in 20 and 10 studies, respectively). Neurological disorders were the dominant disease category (16 studies). From an AI perspective, conventional ML models were the most used (19 studies), followed by DL models (16 studies). The multimodal data used in the included studies were mostly from private repositories (21 studies). Through this scoping review, we offer new insights for researchers interested in the current state of knowledge within this research field.
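The "early fusion" strategy the abstract highlights simply concatenates features from each modality into a single vector before training one model. The sketch below illustrates the idea with scikit-learn on synthetic data; the feature counts, the logistic-regression choice, and all variable names are illustrative assumptions, not details taken from the review or the studies it covers.

```python
# Minimal sketch of early fusion: EHR features and image-derived features
# are concatenated into one joint vector, then a single classifier is
# trained on the fused representation. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
ehr = rng.normal(size=(n, 10))   # e.g. labs, vitals, demographics
img = rng.normal(size=(n, 32))   # e.g. an embedding extracted from an image
# Synthetic outcome that depends on both modalities.
y = (ehr[:, 0] + img[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Early fusion: one joint feature vector per patient.
X = np.concatenate([ehr, img], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"fused-model AUC: {auc:.2f}")
```

Late fusion, by contrast, would train one model per modality and combine their predictions; intermediate fusion merges learned representations inside a neural network.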

Citing Articles

Biases in Artificial Intelligence Application in Pain Medicine.

Jumreornvong O, Perez A, Malave B, Mozawalla F, Kia A, Nwaneshiudu C. J Pain Res. 2025; 18:1021-1033.

PMID: 40041672; PMC: 11878133; DOI: 10.2147/JPR.S495934.


Artificial Intelligence for Neuroimaging in Pediatric Cancer.

Dalboni da Rocha J, Lai J, Pandey P, Myat P, Loschinskey Z, Bag A. Cancers (Basel). 2025; 17(4).

PMID: 40002217; PMC: 11852968; DOI: 10.3390/cancers17040622.


A New Smartphone-Based Cognitive Screening Battery for Multiple Sclerosis (icognition): Validation Study.

Denissen S, Van Laethem D, Baijot J, Costers L, Descamps A, Van Remoortel A. J Med Internet Res. 2025; 27:e53503.

PMID: 39832354; PMC: 11791456; DOI: 10.2196/53503.


Comparison of Intratumoral and Peritumoral Deep Learning, Radiomics, and Fusion Models for Predicting KRAS Gene Mutations in Rectal Cancer Based on Endorectal Ultrasound Imaging.

Gan Y, Hu Q, Shen Q, Lin P, Qian Q, Zhuo M. Ann Surg Oncol. 2024; 32(4):3019-3030.

PMID: 39690384; DOI: 10.1245/s10434-024-16697-5.


Advancing healthcare through multimodal data fusion: a comprehensive review of techniques and applications.

Teoh J, Dong J, Zuo X, Lai K, Hasikin K, Wu X. PeerJ Comput Sci. 2024; 10:e2298.

PMID: 39650483; PMC: 11623190; DOI: 10.7717/peerj-cs.2298.

