
Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective

Overview
Publisher BioMed Central
Date 2020 Dec 1
PMID 33256715
Citations 263
Abstract

Background: Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and offers an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

Methods: Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the "Principles of Biomedical Ethics" by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

Results: Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and of what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.

Conclusions: To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.

Citing Articles

KAN-EEG: towards replacing backbone-MLP for an effective seizure detection system.

Herbozo Contreras L, Cui J, Yu L, Huang Z, Nikpour A, Kavehei O. R Soc Open Sci. 2025; 12(3):240999.

PMID: 40078924 PMC: 11898101. DOI: 10.1098/rsos.240999.


Bias recognition and mitigation strategies in artificial intelligence healthcare applications.

Hasanzadeh F, Josephson C, Waters G, Adedinsewo D, Azizi Z, White J. NPJ Digit Med. 2025; 8(1):154.

PMID: 40069303 PMC: 11897215. DOI: 10.1038/s41746-025-01503-7.


On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments.

Blackman J, Veerapen R. BMC Med Inform Decis Mak. 2025; 25(1):111.

PMID: 40045339 PMC: 11881432. DOI: 10.1186/s12911-025-02891-2.


Pretrained transformers applied to clinical studies improve predictions of treatment efficacy and associated biomarkers.

Arango-Argoty G, Kipkogei E, Stewart R, Sun G, Patra A, Kagiampakis I. Nat Commun. 2025; 16(1):2101.

PMID: 40025003 PMC: 11873189. DOI: 10.1038/s41467-025-57181-2.


Perspectives and Tools in Liver Graft Assessment: A Transformative Era in Liver Transplantation.

Safi K, Pawlicka A, Pradhan B, Sobieraj J, Zhylko A, Struga M. Biomedicines. 2025; 13(2).

PMID: 40002907 PMC: 11852418. DOI: 10.3390/biomedicines13020494.

