
Ensuring Fairness in Machine Learning to Advance Health Equity

Overview
Journal: Ann Intern Med
Specialty: General Medicine
Date: 2018 Dec 4
PMID: 30508424
Citations: 256
Abstract

Machine learning is used increasingly in clinical care to improve diagnosis, treatment selection, and health system efficiency. Because machine-learning models learn from historically collected data, populations that have experienced human and structural biases in the past (called protected groups) are vulnerable to harm by incorrect predictions or withholding of resources. This article describes how model design, biases in data, and the interactions of model predictions with clinicians and patients may exacerbate health care disparities. Rather than simply guarding against these harms passively, machine-learning systems should be used proactively to advance health equity. For that goal to be achieved, principles of distributive justice must be incorporated into model design, deployment, and evaluation. The article describes several technical implementations of distributive justice (specifically those that ensure equality in patient outcomes, performance, and resource allocation) and guides clinicians as to when they should prioritize each principle. Machine learning is providing increasingly sophisticated decision support and population-level monitoring, and it should encode principles of justice to ensure that models benefit all patients.
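
The three distributive-justice criteria named in the abstract (equality in patient outcomes, model performance, and resource allocation) are typically audited by stratifying evaluation metrics across protected groups. The Python sketch below is a minimal illustration of such a group-wise audit, not code from the article; the DataFrame columns (group, y_true, y_pred) and the example data are assumptions for illustration only.

# Minimal sketch (not from the article): stratify performance by protected group
# to probe an "equal performance" criterion. Column names and example data are
# hypothetical.
import pandas as pd

def group_performance(df, group_col="group", y_true="y_true", y_pred="y_pred"):
    """Sensitivity, specificity, and selection rate per protected group."""
    rows = []
    for g, sub in df.groupby(group_col):
        tp = int(((sub[y_pred] == 1) & (sub[y_true] == 1)).sum())
        fn = int(((sub[y_pred] == 0) & (sub[y_true] == 1)).sum())
        tn = int(((sub[y_pred] == 0) & (sub[y_true] == 0)).sum())
        fp = int(((sub[y_pred] == 1) & (sub[y_true] == 0)).sum())
        rows.append({
            group_col: g,
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
            "selection_rate": (tp + fp) / len(sub),  # share of group flagged for the resource
            "n": len(sub),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: large sensitivity gaps across groups would point to an
# equal-performance (equal-opportunity) violation; gaps in selection_rate bear
# on equality of resource allocation.
example = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B"],
    "y_true": [1, 0, 1, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 1, 0],
})
print(group_performance(example))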

Citing Articles

Introducing the Team Card: Enhancing governance for medical Artificial Intelligence (AI) systems in the age of complexity.

Modise L, Alborzi Avanaki M, Ameen S, Celi L, Chen V, Cordes A. PLOS Digit Health. 2025; 4(3):e0000495.

PMID: 40036250. PMC: 11878906. DOI: 10.1371/journal.pdig.0000495.


Assessing Algorithm Fairness Requires Adjustment for Risk Distribution Differences: Re-Considering the Equal Opportunity Criterion.

Hegarty S, Linn K, Zhang H, Teeple S, Albert P, Parikh R. medRxiv. 2025.

PMID: 39974139. PMC: 11838655. DOI: 10.1101/2025.01.31.25321489.


Factors influencing trust in algorithmic decision-making: an indirect scenario-based experiment.

Marmolejo-Ramos F, Marrone R, Korolkiewicz M, Gabriel F, Siemens G, Joksimovic S. Front Artif Intell. 2025; 7:1465605.

PMID: 39968162. PMC: 11832472. DOI: 10.3389/frai.2024.1465605.


Factors underpinning the performance of implemented artificial intelligence-based patient deterioration prediction systems: reasons for selection and implications for hospitals and researchers.

van der Vegt A, Campbell V, Wang S, Malycha J, Scott I. J Am Med Inform Assoc. 2025; 32(3):492-509.

PMID: 39963969. PMC: 11833469. DOI: 10.1093/jamia/ocae321.


Harnessing the Potential of Artificial Intelligence in Yoga Therapy.

Sinha N, Sinha R. Int J Yoga. 2025; 17(3):242-245.

PMID: 39959510. PMC: 11823551. DOI: 10.4103/ijoy.ijoy_124_24.

