
The Role of Explainability in Creating Trustworthy Artificial Intelligence for Health Care: A Comprehensive Survey of the Terminology, Design Choices, and Evaluation Strategies

Overview
Journal J Biomed Inform
Publisher Elsevier
Date 2020 Dec 14
PMID 33309898
Citations 99
Abstract

Artificial intelligence (AI) has huge potential to improve the health and well-being of people, but adoption in clinical practice is still limited. Lack of transparency has been identified as one of the main barriers to implementation, as clinicians must be confident that an AI system can be trusted. Explainable AI has the potential to overcome this issue and can be a step towards trustworthy AI. In this paper, we review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems for the health-care domain and to contribute to the formalization of the field of explainable AI. We argue that the reason for demanding explainability determines what should be explained, which in turn determines the relative importance of the properties of explainability (i.e., interpretability and fidelity). Based on this, we propose a framework to guide the choice between classes of explainable AI methods (explainable modelling versus post-hoc explanation; model-based, attribution-based, or example-based explanations; global versus local explanations). Furthermore, we find that quantitative evaluation metrics, which are important for objective, standardized evaluation, are still lacking for some properties (e.g. clarity) and types of explanations (e.g. example-based methods). We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice, and complementary measures may be needed to create trustworthy AI in health care (e.g. reporting data quality, performing extensive (external) validation, and regulation).
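One class in the taxonomy above can be illustrated with a toy sketch: a post-hoc, local, attribution-based explanation computed by occlusion (replacing one feature at a time with a baseline value and measuring the change in the prediction). The model, its weights, and the baseline below are hypothetical and purely illustrative; they are not taken from the paper.

```python
def risk_model(x):
    """Toy 'black-box' clinical risk score over three features (hypothetical)."""
    w = [0.8, 0.3, -0.5]  # hidden weights the explainer does not see
    return sum(wi * xi for wi, xi in zip(w, x))

def occlusion_attribution(f, x, baseline):
    """Post-hoc, local, attribution-based explanation: the contribution of
    feature i is the drop in f when x[i] is replaced by its baseline value."""
    return [f(x) - f(x[:i] + [baseline[i]] + x[i + 1:]) for i in range(len(x))]

patient = [2.0, 1.0, 3.0]           # e.g. scaled lab values (hypothetical)
population_mean = [1.0, 1.0, 1.0]   # baseline: an 'average' patient
attributions = occlusion_attribution(risk_model, patient, population_mean)
```

Because the explainer only queries `risk_model` as a function, this is a model-agnostic post-hoc method; an explainable-modelling approach would instead build interpretability into the model itself.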

Citing Articles

The Role of Machine Learning Models in Predicting Cirrhosis Mortality: A Systematic Review.

Mohamud K, Elzubair Eltahir S, Ahmed Alhardalo H, Albashir H, Ali Mohamed Zain N, Abdelrahman Ibrahim M. Cureus. 2025; 17(1):e78155.

PMID: 40026938; PMC: 11867977; DOI: 10.7759/cureus.78155.


Retinal vascular alterations in cognitive impairment: A multicenter study in China.

Shi Q, Ni A, Li K, Su W, Xie W, Zheng H. Alzheimers Dement. 2025; 21(2):e14593.

PMID: 39988572; PMC: 11847650; DOI: 10.1002/alz.14593.


Finding Consensus on Trust in AI in Health Care: Recommendations From a Panel of International Experts.

Starke G, Gille F, Termine A, Aquino Y, Chavarriaga R, Ferrario A. J Med Internet Res. 2025; 27:e56306.

PMID: 39969962; PMC: 11888049; DOI: 10.2196/56306.


Survival machine learning model of T1 colorectal postoperative recurrence after endoscopic resection and surgical operation: a retrospective cohort study.

Li Z, Aihemaiti Y, Yang Q, Ahemai Y, Li Z, Du Q. BMC Cancer. 2025; 25(1):262.

PMID: 39953493; PMC: 11827358; DOI: 10.1186/s12885-025-13663-6.


Demystifying the black box: A survey on explainable artificial intelligence (XAI) in bioinformatics.

Budhkar A, Song Q, Su J, Zhang X. Comput Struct Biotechnol J. 2025; 27:346-359.

PMID: 39897059; PMC: 11782883; DOI: 10.1016/j.csbj.2024.12.027.