
Leveraging Generative AI for Clinical Evidence Synthesis Needs to Ensure Trustworthiness

Overview
Journal J Biomed Inform
Publisher Elsevier
Date 2024 Apr 12
PMID 38608915
Abstract

Evidence-based medicine promises to improve the quality of healthcare by grounding medical decisions and practices in the best available evidence. The rapid growth of medical evidence, which can be obtained from various sources, poses a challenge in collecting, appraising, and synthesizing evidential information. Recent advancements in generative AI, exemplified by large language models, hold promise for facilitating this arduous task. However, developing accountable, fair, and inclusive models remains a complicated undertaking. In this perspective, we discuss the trustworthiness of generative AI in the context of automated summarization of medical evidence.

Citing Articles

Evaluating a large language model's ability to answer clinicians' requests for evidence summaries.

Blasingame M, Koonce T, Williams A, Giuse D, Su J, Krump P. J Med Libr Assoc. 2025; 113(1):65-77.

PMID: 39975503. PMC: 11835037. DOI: 10.5195/jmla.2025.1985.


Semi-supervised learning from small annotated data and large unlabeled data for fine-grained Participants, Intervention, Comparison, and Outcomes entity recognition.

Chen F, Zhang G, Fang Y, Peng Y, Weng C. J Am Med Inform Assoc. 2025; 32(3):555-565.

PMID: 39823371. PMC: 11833487. DOI: 10.1093/jamia/ocae326.


Demystifying Large Language Models for Medicine: A Primer.

Jin Q, Wan N, Leaman R, Tian S, Wang Z, Yang Y. ArXiv. 2025.

PMID: 39801619. PMC: 11722506.


The doctor will polygraph you now.

Anibal J, Gunkel J, Awan S, Huth H, Nguyen H, Le T. Unknown. 2025; 1(1):1.

PMID: 39759269. PMC: 11698301. DOI: 10.1038/s44401-024-00001-4.


Exploring prospects, hurdles, and road ahead for generative artificial intelligence in orthopedic education and training.

Gupta N, Khatri K, Malik Y, Lakhani A, Kanwal A, Aggarwal S. BMC Med Educ. 2024; 24(1):1544.

PMID: 39732679. PMC: 11681633. DOI: 10.1186/s12909-024-06592-8.


References
1.
Bromme R, Mede N, Thomm E, Kremer B, Ziegler R . An anchor in troubled times: Trust in science before and within the COVID-19 pandemic. PLoS One. 2022; 17(2):e0262823. PMC: 8827432. DOI: 10.1371/journal.pone.0262823. View

2.
Chen Z, Liu H, Liao S, Bernard M, Kang T, Stewart L . Representation and Normalization of Complex Interventions for Evidence Computing. Stud Health Technol Inform. 2022; 290:592-596. DOI: 10.3233/SHTI220146. View

3.
Chang T, Sjoding M, Wiens J . Disparate Censorship & Undertesting: A Source of Label Bias in Clinical Machine Learning. Proc Mach Learn Res. 2023; 182:343-390. PMC: 10162497. View

4.
Turfah A, Liu H, Stewart L, Kang T, Weng C . Extending PICO with Observation Normalization for Evidence Computing. Stud Health Technol Inform. 2022; 290:268-272. DOI: 10.3233/SHTI220076. View

5.
Gershman B, Guo D, Dahabreh I . Using observational data for personalized medicine when clinical trial evidence is limited. Fertil Steril. 2018; 109(6):946-951. DOI: 10.1016/j.fertnstert.2018.04.005. View