PMID: 38226965

A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable

Abstract

When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient's (former) autonomy since it draws on the 'wrong' kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently 'fine-tuned' on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient's preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient's own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.
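The technical-feasibility claim rests on parameter-efficient fine-tuning, which makes it cheap to adapt a large language model to a single person's records. The article does not prescribe an implementation; the sketch below is a minimal illustration of the general technique, assuming the Hugging Face transformers, peft, and datasets libraries, a small stand-in base model (gpt2), and fabricated patient records. The model choice, data format, and hyperparameters are all assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of per-patient fine-tuning for a P4-style predictor.
# All names, data, and hyperparameters are illustrative assumptions; the
# article itself does not specify an implementation.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # small stand-in; a real P4 would use a far stronger LLM
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Person-specific material of the kind the abstract mentions,
# e.g. prior treatment decisions (fabricated examples).
records = [
    "2019 advance directive: declined mechanical ventilation.",
    "2021: chose home palliative care over a further round of chemotherapy.",
]
ds = Dataset.from_dict({"text": records}).map(
    lambda r: tok(r["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

# LoRA trains only small low-rank adapter matrices, so each patient's
# 'model' is a lightweight adapter rather than a full copy of the LLM.
model = get_peft_model(
    model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16,
                      target_modules=["c_attn"]))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="p4_adapter", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=ds,
    # mlm=False -> ordinary causal language-modelling labels
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()

# At decision time the adapted model could be queried with a prompt about
# the treatment in question (output here is purely illustrative).
prompt = "Regarding resuscitation, this patient would most likely"
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```

If this adapter-based approach holds up, one practical consequence is that the person-specific component remains small enough to be stored, audited, or deleted independently of the base model, which bears on the data-specificity and autonomy concerns the abstract raises.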

Citing Articles

How predictive medicine leads to solidarity gaps in health.

Braun M. NPJ Digit Med. 2025; 8(1):111.

PMID: 39966662 PMC: 11836224. DOI: 10.1038/s41746-025-01497-2.


Predicting patient reported outcome measures: a scoping review for the artificial intelligence-guided patient preference predictor.

Balch J, Chatham A, Hong P, Manganiello L, Baskaran N, Bihorac A. Front Artif Intell. 2024; 7:1477447.

PMID: 39564457 PMC: 11573790. DOI: 10.3389/frai.2024.1477447.


Digital Doppelgängers and Lifespan Extension: What Matters?

Iglesias S, Earp B, Voinea C, Porsdam Mann S, Zahiu A, Jecker N. Am J Bioeth. 2024; 25(2):95-110.

PMID: 39540593 PMC: 11804783. DOI: 10.1080/15265161.2024.2416133.


Incorporating Patient Values in Large Language Model Recommendations for Surrogate and Proxy Decisions.

Nolan V, Balch J, Baskaran N, Shickel B, Efron P, Upchurch Jr G. Crit Care Explor. 2024; 6(8):e1131.

PMID: 39132980 PMC: 11321752. DOI: 10.1097/CCE.0000000000001131.


Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists.

Benzinger L, Epping J, Ursin F, Salloch S. BMC Med Ethics. 2024; 25(1):78.

PMID: 39026308 PMC: 11256615. DOI: 10.1186/s12910-024-01079-z.

