
A Framework to Identify Ethical Concerns with ML-guided Care Workflows: a Case Study of Mortality Prediction to Guide Advance Care Planning

Overview
Date 2023 Feb 24
PMID 36826400
Abstract

Objective: Identifying ethical concerns with machine learning applications to healthcare (ML-HCA) before problems arise is now a stated goal of ML design oversight groups and regulatory agencies. The lack of an accepted standard methodology for ethical analysis, however, presents challenges. In this case study, we evaluate the use of a stakeholder "values-collision" approach to identify consequential ethical challenges associated with an ML-HCA for advance care planning (ACP). Identification of ethical challenges could guide revision and improvement of the ML-HCA.

Materials And Methods: We conducted semistructured interviews with the designers, clinician-users, affiliated administrators, and patients, followed by inductive qualitative analysis of the transcribed interviews using modified grounded theory.

Results: Seventeen stakeholders were interviewed. Five "values-collisions," points where stakeholders disagreed about decisions with ethical implications, were identified: (1) the end-of-life workflow and how model output is introduced; (2) which stakeholders receive predictions; (3) benefit-harm trade-offs; (4) whether the ML design team has a fiduciary relationship to patients and clinicians; and (5) whether and how to protect early deployment research from external pressures, such as news scrutiny, before the research is completed.

Discussion: Based on these findings, the ML design team prioritized: (1) alternative workflow implementation strategies; (2) clarification that the prediction was evaluated only for ACP need, not for other mortality-related ends; and (3) shielding the research from scrutiny until endpoint-driven studies were completed.

Conclusion: In this case study, our ethical analysis of this ML-HCA for ACP identified multiple sites of intrastakeholder disagreement that mark areas of ethical and value tension. These findings provided a useful initial ethical screening.

Citing Articles

Clearing the Fog: A Scoping Literature Review on the Ethical Issues Surrounding Artificial Intelligence-Based Medical Devices.

Maccaro A, Stokes K, Statham L, He L, Williams A, Pecchia L. J Pers Med. 2024; 14(5).

PMID: 38793025. PMC: 11121798. DOI: 10.3390/jpm14050443.


Leveraging Clinical Informatics to Address the Quintuple Aim for End-of-Life Care.

Zaleski A, Thomas Craig K, Caddigan E, Yang H, Cheng Z, McNutt S. AMIA Annu Symp Proc. 2024; 2023:784-793.

PMID: 38222390. PMC: 10785881.


Quantitative and qualitative methods advance the science of clinical workflow research.

Bakken S. J Am Med Inform Assoc. 2023; 30(5):795-796.

PMID: 37073766. PMC: 10114099. DOI: 10.1093/jamia/ocad056.
