
Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence

Overview
Specialty Medical Ethics
Date 2022 Oct 20
PMID 36263755
Abstract

Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance deep learning-based applications using multilayered artificial neural networks, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust in nonhuman agents constitutes a category error and worry about the concept being misused for ethics washing. Proponents of trust have responded to these worries from various angles, disentangling different concepts and aspects of trust in AI, potentially organized in layers or dimensions. Given the substantial disagreements across these accounts of trust and the important worries about ethics washing, we embrace a diverging strategy here. Instead of aiming for a positive definition of the elements and nature of trust in AI, we proceed ex negativo: we look at cases where trust or distrust is misplaced. Comparing these instances with trust extended in doctor-patient relationships, we systematize them and propose a taxonomy of both misplaced trust and distrust. By inverting the perspective and focusing on negative examples, we develop an account that provides useful ethical constraints for decisions in clinical as well as regulatory contexts and that highlights how we should engage with medical AI.

Citing Articles

Finding Consensus on Trust in AI in Health Care: Recommendations From a Panel of International Experts.

Starke G, Gille F, Termine A, Aquino Y, Chavarriaga R, Ferrario A. J Med Internet Res. 2025; 27:e56306.

PMID: 39969962 PMC: 11888049. DOI: 10.2196/56306.


Cultural variation in trust and acceptability of artificial intelligence diagnostics for dementia.

Chandra A, Senthilvel K, Anjum R, Uchegbu I, Smith L, Beaumont H. J Alzheimers Dis. 2025; :13872877251319353.

PMID: 39956979 PMC: 7617421. DOI: 10.1177/13872877251319353.


Physicians' ethical concerns about artificial intelligence in medicine: a qualitative study.

Kahraman F, Aktas A, Bayrakceken S, Cakar T, Tarcan H, Bayram B. Front Public Health. 2024; 12:1428396.

PMID: 39664534 PMC: 11631923. DOI: 10.3389/fpubh.2024.1428396.


Navigating the Landscape of Digital Twins in Medicine: A Relational Bioethical Inquiry.

Ferlito B, De Proost M, Segers S. Asian Bioeth Rev. 2024; 16(3):471-481.

PMID: 39022372 PMC: 11250715. DOI: 10.1007/s41649-024-00280-x.


Large language models for generating medical examinations: systematic review.

Artsi Y, Sorin V, Konen E, Glicksberg B, Nadkarni G, Klang E. BMC Med Educ. 2024; 24(1):354.

PMID: 38553693 PMC: 10981304. DOI: 10.1186/s12909-024-05239-y.