Using Cognitive Psychology to Understand GPT-3
Overview
We study GPT-3, a recent large language model, using tools from cognitive psychology. More specifically, we assess GPT-3's decision-making, information search, deliberation, and causal reasoning abilities on a battery of canonical experiments from the literature. We find that much of GPT-3's behavior is impressive: it solves vignette-based tasks as well as or better than human subjects, makes decent decisions from descriptions, outperforms humans in a multi-armed bandit task, and shows signatures of model-based reinforcement learning. Yet we also find that small perturbations to vignette-based tasks can lead GPT-3 vastly astray, that it shows no signatures of directed exploration, and that it fails miserably in a causal reasoning task. Taken together, these results enrich our understanding of current large language models and pave the way for future investigations using tools from cognitive psychology to study increasingly capable and opaque artificial agents.
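The multi-armed bandit paradigm mentioned in the abstract can be illustrated with a minimal simulation. This is a generic sketch of the task class, not the paper's actual experimental setup: the reward probabilities, trial count, and the epsilon-greedy agent standing in for a decision-maker are all illustrative assumptions.

```python
import random

def run_bandit(arm_probs, n_trials=100, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy agent on a Bernoulli multi-armed bandit.

    arm_probs: hypothetical reward probability of each arm.
    Returns the agent's final value estimates and total reward earned.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(arm_probs)  # running mean reward per arm
    counts = [0] * len(arm_probs)       # pulls per arm
    total_reward = 0
    for _ in range(n_trials):
        if rng.random() < epsilon:
            # explore: pick a random arm
            arm = rng.randrange(len(arm_probs))
        else:
            # exploit: pick the arm with the best current estimate
            arm = max(range(len(arm_probs)), key=lambda a: estimates[a])
        reward = 1 if rng.random() < arm_probs[arm] else 0
        counts[arm] += 1
        # incremental update of the running mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

estimates, total = run_bandit([0.2, 0.8], n_trials=500)
```

An agent that balances exploration and exploitation should concentrate its choices on the higher-paying arm over time; "directed exploration," which the paper reports GPT-3 lacks, would mean sampling arms specifically to reduce uncertainty rather than at random as epsilon-greedy does.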