
The Enlightening Role of Explainable Artificial Intelligence in Medical & Healthcare Domains: A Systematic Literature Review

Overview
Journal: Comput Biol Med
Publisher: Elsevier
Date: 2023 Oct 8
PMID: 37806061
Abstract

In domains such as medicine and healthcare, the interpretability and explainability of machine learning and artificial intelligence systems are crucial for building trust in their results. Errors made by these systems, such as incorrect diagnoses or treatments, can have severe and even life-threatening consequences for patients. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a popular area of research focused on opening up the black-box nature of complex, hard-to-interpret machine learning models. While humans can increase the accuracy of these models through technical expertise, understanding how the models actually function during training can be difficult or even impossible. XAI algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) can explain these models' predictions by quantifying feature importance, thereby improving trust and confidence in the systems. Many articles have been published that propose solutions to medical problems by pairing machine learning models with XAI algorithms to provide interpretability and explainability. In our study, we identified 454 articles published between 2018 and 2022 and analyzed 93 of them to explore the use of these techniques in the medical domain.
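As a concrete illustration of the feature-importance explanations mentioned in the abstract, the minimal Python sketch below applies SHAP's TreeExplainer to a tree-based model. The dataset (scikit-learn's toy diabetes data), model, and variable names are illustrative assumptions, not material from the reviewed articles.

# Minimal sketch (illustrative only, not code from the reviewed studies):
# ranking features by SHAP importance for a tree model on a toy health dataset.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# sklearn's diabetes dataset stands in for a clinical table.
data = load_diabetes()
X, y = data.data, data.target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP values decompose each prediction into additive per-feature
# contributions relative to the model's expected output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # shape: (100, n_features)

# Global importance: mean absolute contribution per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

Printing the mean absolute SHAP value per feature is one common way such studies report global feature importance; per-patient explanations use the individual rows of shap_values instead.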

Citing Articles

Generational differences in healthcare: the role of technology in the path forward.

Cecconi C, Adams R, Cardone A, Declaye J, Silva M, Vanlerberghe T. Front Public Health. 2025; 13:1546317.

PMID: 40078753; PMC: 11897013; DOI: 10.3389/fpubh.2025.1546317.


Combining clinical characteristics with CT radiomics to predict Ki67 expression level of small renal mass based on artificial intelligence algorithms.

Lin J, Ou Y, Luo M, Jiang X, Cen S, Zeng G. Front Oncol. 2025; 15:1541143.

PMID: 40061892; PMC: 11885116; DOI: 10.3389/fonc.2025.1541143.


The role of explainable artificial intelligence in disease prediction: a systematic literature review and future research directions.

Alkhanbouli R, Matar Abdulla Almadhaani H, Alhosani F, Simsekler M. BMC Med Inform Decis Mak. 2025; 25(1):110.

PMID: 40038704; PMC: 11877768; DOI: 10.1186/s12911-025-02944-6.


The diagnostic and prognostic capability of artificial intelligence in spinal cord injury: A systematic review.

Gill S, Subbiah Ponniah H, Giersztein S, Anantharaj R, Namireddy S, Killilea J. Brain Spine. 2025; 5:104208.

PMID: 40027293; PMC: 11871462; DOI: 10.1016/j.bas.2025.104208.


Advantages and limitations of large language models for antibiotic prescribing and antimicrobial stewardship.

Giacobbe D, Marelli C, La Manna B, Padua D, Malva A, Guastavino S. NPJ Antimicrob Resist. 2025; 3(1):14.

PMID: 40016394; PMC: 11868396; DOI: 10.1038/s44259-025-00084-5.