
"I Don't Think People Are Ready to Trust These Algorithms at Face Value": Trust and the Use of Machine Learning Algorithms in the Diagnosis of Rare Disease

Overview
Journal BMC Med Ethics
Publisher BioMed Central
Specialty Medical Ethics
Date 2022 Nov 17
PMID 36384545
Abstract

Background: As the use of AI becomes more pervasive and computerised systems are used in clinical decision-making, the role of trust in, and the trustworthiness of, AI tools will need to be addressed. Using the case of computational phenotyping to support the diagnosis of rare disease in dysmorphology, this paper explores the conditions under which we could place trust in medical AI tools that employ machine learning.

Methods: Semi-structured qualitative interviews (n = 20) were conducted with stakeholders (clinical geneticists, data scientists, bioinformaticians, and industry and patient support group spokespersons) who design and/or work with computational phenotyping (CP) systems. The method of constant comparison was used to analyse the interview data.

Results: Interviewees emphasised the importance of establishing trust in the use of CP technology for identifying rare diseases. Trust was formulated in two interrelated ways in these data. First, interviewees talked about the importance of using CP tools within the context of a trust relationship, arguing that patients will need to trust clinicians who use AI tools, and that clinicians will need to trust AI developers, if they are to adopt this technology. Second, they described a need to establish trust in the technology itself, or in the knowledge it provides: epistemic trust. Interviewees suggested that CP tools used for the diagnosis of rare diseases might be perceived as more trustworthy if the user is able to vouch for the technology's reliability and accuracy, and if the person using or developing them is trusted.

Conclusion: This study suggests we need to take deliberate and meticulous steps to design reliable or confidence-worthy AI systems for use in healthcare. In addition, we need to devise reliable or confidence-worthy processes that would give rise to reliable systems; these could take the form of RCTs and/or systems of accountability, transparency and responsibility that would signify the epistemic trustworthiness of these tools.

Citing Articles

Risk prediction of hyperuricemia based on particle swarm fusion machine learning solely dependent on routine blood tests.

Fang M, Pan C, Yu X, Li W, Wang B, Zhou H. BMC Med Inform Decis Mak. 2025; 25(1):131.

PMID: 40087711. DOI: 10.1186/s12911-025-02956-2.


Prioritizing Trust in Podiatrists' Preference for AI in Supportive Roles Over Diagnostic Roles in Health Care: Qualitative Interview and Focus Group Study.

Tahtali M, Snijders C, Dirne C, Le Blanc P. JMIR Hum Factors. 2025; 12:e59010.

PMID: 39983118. PMC: 11890136. DOI: 10.2196/59010.


An Explainable AI Application (AF'fective) to Support Monitoring of Patients With Atrial Fibrillation After Catheter Ablation: Qualitative Focus Group, Design Session, and Interview Study.

She W, Siriaraya P, Iwakoshi H, Kuwahara N, Senoo K. JMIR Hum Factors. 2025; 12:e65923.

PMID: 39946707. PMC: 11888073. DOI: 10.2196/65923.


Just another tool in their repertoire: uncovering insights into public and patient perspectives on clinicians' use of machine learning in perioperative care.

Gonzalez X, Steger-May K, Abraham J. J Am Med Inform Assoc. 2024; 32(1):150-162.

PMID: 39401245. PMC: 11648718. DOI: 10.1093/jamia/ocae257.


Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care.

Witkowski K, Okhai R, Neely S. BMC Med Ethics. 2024; 25(1):74.

PMID: 38909180. PMC: 11193174. DOI: 10.1186/s12910-024-01066-4.

