PMID: 39313595

A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models

Abstract

Large language models (LLMs) hold promise to serve complex health information needs but also have the potential to introduce harm and exacerbate health disparities. Reliably evaluating equity-related model failures is a critical step toward developing systems that promote health equity. We present resources and methodologies for surfacing biases with potential to precipitate equity-related harms in long-form, LLM-generated answers to medical questions and conduct a large-scale empirical case study with the Med-PaLM 2 LLM. Our contributions include a multifactorial framework for human assessment of LLM-generated answers for biases and EquityMedQA, a collection of seven datasets enriched for adversarial queries. Both our human assessment framework and our dataset design process are grounded in an iterative participatory approach and review of Med-PaLM 2 answers. Through our empirical study, we find that our approach surfaces biases that may be missed by narrower evaluation approaches. Our experience underscores the importance of using diverse assessment methodologies and involving raters of varying backgrounds and expertise. While our approach is not sufficient to holistically assess whether the deployment of an artificial intelligence (AI) system promotes equitable health outcomes, we hope that it can be leveraged and built upon toward a shared goal of LLMs that promote accessible and equitable healthcare.

Citing Articles

Open-Source Large Language Models in Radiology: A Review and Tutorial for Practical Research and Clinical Deployment.

Savage C, Kanhere A, Parekh V, Langlotz C, Joshi A, Huang H. Radiology. 2025; 314(1):e241073.

PMID: 39873598. PMC: 11783163. DOI: 10.1148/radiol.241073.


Describing the Framework for AI Tool Assessment in Mental Health and Applying It to a Generative AI Obsessive-Compulsive Disorder Platform: Tutorial.

Golden A, Aboujaoude E. JMIR Form Res. 2024; 8:e62963.

PMID: 39423001. PMC: 11530715. DOI: 10.2196/62963.
