
Assessing the Alignment of Large Language Models With Human Values for Mental Health Integration: Cross-Sectional Study Using Schwartz's Theory of Basic Values

Overview
Date: 2024 Apr 9
PMID: 38593424
Abstract

Background: Large language models (LLMs) hold potential for mental health applications. However, their opaque alignment processes may embed biases that shape problematic perspectives. Evaluating the values embedded within LLMs that guide their decision-making has ethical importance. Schwartz's theory of basic values (STBV) provides a framework for quantifying cultural value orientations and has shown utility for examining values in mental health contexts, including cultural, diagnostic, and therapist-client dynamics.

Objective: This study aimed to (1) evaluate whether the STBV can measure value-like constructs within leading LLMs and (2) determine whether LLMs exhibit value-like patterns that are distinct from those of humans and from each other.

Methods: In total, 4 LLMs (Bard, Claude 2, Generative Pretrained Transformer [GPT]-3.5, GPT-4) were anthropomorphized and instructed to complete the Portrait Values Questionnaire-Revised (PVQ-RR) to assess value-like constructs. Their responses over 10 trials were analyzed for reliability and validity. To benchmark the LLMs' value profiles, their results were compared to published data from a diverse sample of 53,472 individuals across 49 nations who had completed the PVQ-RR. This allowed us to assess whether the LLMs diverged from established human value patterns across cultural groups. Value profiles were also compared between models via statistical tests.

Results: The PVQ-RR showed good reliability and validity for quantifying value-like infrastructure within the LLMs. However, substantial divergence emerged between the LLMs' value profiles and the population data. The models lacked consensus and exhibited distinct motivational biases, reflecting opaque alignment processes. For example, all models prioritized universalism and self-direction while de-emphasizing achievement, power, and security relative to humans. Discriminant analysis successfully differentiated the 4 LLMs' distinct value profiles. Further examination found that these biased value profiles strongly predicted the LLMs' responses to mental health dilemmas that required choosing between opposing values. This provided further evidence that the models embed distinct motivational value-like constructs that shape their decision-making.

Conclusions: This study leveraged the STBV to map the motivational value-like infrastructure underpinning leading LLMs. Although the study demonstrated that the STBV can effectively characterize value-like infrastructure within LLMs, the substantial divergence from human values raises ethical concerns about aligning these models with mental health applications. The biases toward certain cultural value sets pose risks if the models are integrated without proper safeguards. For example, prioritizing universalism could promote unconditional acceptance even when clinically unwise. Furthermore, the differences between the LLMs underscore the need to standardize alignment processes to capture true cultural diversity. Thus, any responsible integration of LLMs into mental health care must account for their embedded biases and motivational mismatches to ensure equitable delivery across diverse populations. Achieving this will require transparency and refinement of alignment techniques to instill comprehensive human values.

Citing Articles

The Feasibility of Large Language Models in Verbal Comprehension Assessment: Mixed Methods Feasibility Study.

Hadar-Shoval D, Lvovsky M, Asraf K, Shimoni Y, Elyoseph Z. JMIR Form Res. 2025; 9:e68347.

PMID: 39993720 PMC: 11894350. DOI: 10.2196/68347.


The externalization of internal experiences in psychotherapy through generative artificial intelligence: a theoretical, clinical, and ethical analysis.

Haber Y, Hadar Shoval D, Levkovich I, Yinon D, Gigi K, Pen O. Front Digit Health. 2025; 7:1512273.

PMID: 39968063 PMC: 11832678. DOI: 10.3389/fdgth.2025.1512273.


Responsible Design, Integration, and Use of Generative AI in Mental Health.

Asman O, Torous J, Tal A. JMIR Ment Health. 2025; 12:e70439.

PMID: 39864170 PMC: 11769776. DOI: 10.2196/70439.


An Ethical Perspective on the Democratization of Mental Health With Generative AI.

Elyoseph Z, Gur T, Haber Y, Simon T, Angert T, Navon Y. JMIR Ment Health. 2024; 11:e58011.

PMID: 39417792 PMC: 11500620. DOI: 10.2196/58011.


The use of Artificial Intelligence in Psychotherapy: Practical and Ethical Aspects.

Ozden H. Turk Psikiyatri Derg. 2024.

PMID: 39399811 PMC: 11681265. DOI: 10.5080/u27603.


References
1.
Maercker A, Zhang X, Gao Z, Kochetkov Y, Lu S, Sang Z . Personal value orientations as mediated predictors of mental health: A three-culture study of Chinese, Russian, and German university students. Int J Clin Health Psychol. 2018; 15(1):8-17. PMC: 6224790. DOI: 10.1016/j.ijchp.2014.06.001. View

2.
Schwartz S, Cieciuch J . Measuring the Refined Theory of Individual Values in 49 Cultural Groups: Psychometrics of the Revised Portrait Value Questionnaire. Assessment. 2021; 29(5):1005-1019. PMC: 9131418. DOI: 10.1177/1073191121998760. View

3.
Grodniewicz J, Hohol M . Waiting for a digital therapist: three challenges on the path to psychotherapy delivered by artificial intelligence. Front Psychiatry. 2023; 14:1190084. PMC: 10267322. DOI: 10.3389/fpsyt.2023.1190084. View

4.
Yang L, Kleinman A, Link B, Phelan J, Lee S, Good B . Culture and stigma: adding moral experience to stigma theory. Soc Sci Med. 2006; 64(7):1524-35. DOI: 10.1016/j.socscimed.2006.11.013. View

5.
Sedlakova J, Trachsel M . Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent?. Am J Bioeth. 2022; 23(5):4-13. DOI: 10.1080/15265161.2022.2048739. View