Dissociating Language and Thought in Large Language Models

Overview
Journal Trends Cogn Sci
Date 2024 Mar 20
PMID 38508911
Abstract

Large language models (LLMs) have come closest among all models to date to mastering human language, yet opinions about their linguistic and cognitive capabilities remain split. Here, we evaluate LLMs using a distinction between formal linguistic competence (knowledge of linguistic rules and patterns) and functional linguistic competence (understanding and using language in the world). We ground this distinction in human neuroscience, which has shown that formal and functional competence rely on different neural mechanisms. Although LLMs are surprisingly good at formal competence, their performance on functional competence tasks remains spotty and often requires specialized fine-tuning and/or coupling with external modules. We posit that models that use language in human-like ways would need to master both of these competence types, which, in turn, could require the emergence of separate mechanisms specialized for formal versus functional linguistic competence.

Citing Articles

Evidence-Based Analysis of AI Chatbots in Oncology Patient Education: Implications for Trust, Perceived Realness, and Misinformation Management.

Lawson McLean A, Hristidis V. J Cancer Educ. 2025.

PMID: 39964607. DOI: 10.1007/s13187-025-02592-4.


Neuron signal attenuation activation mechanism for deep learning.

Jiang W, Yuan H, Liu W. Patterns (N Y). 2025; 6(1):101117.

PMID: 39896257. PMC: 11783890. DOI: 10.1016/j.patter.2024.101117.


Divergences between Language Models and Human Brains.

Zhou Y, Liu E, Neubig G, Tarr M, Wehbe L. ArXiv. 2025.

PMID: 39876931. PMC: 11774444.


Fostering effective hybrid human-LLM reasoning and decision making.

Passerini A, Gema A, Minervini P, Sayin B, Tentori K. Front Artif Intell. 2025; 7:1464690.

PMID: 39845098. PMC: 11751230. DOI: 10.3389/frai.2024.1464690.


Universality of representation in biological and artificial neural networks.

Hosseini E, Casto C, Zaslavsky N, Conwell C, Richardson M, Fedorenko E. bioRxiv. 2025.

PMID: 39764030. PMC: 11703180. DOI: 10.1101/2024.12.26.629294.


References
1. Fedorenko E, Behr M, Kanwisher N. Functional specificity for high-level linguistic processing in the human brain. Proc Natl Acad Sci U S A. 2011; 108(39):16428-33. PMC: 3182706. DOI: 10.1073/pnas.1112937108.

2. Fedorenko E, Scott T, Brunner P, Coon W, Pritchett B, Schalk G. Neural correlate of the construction of sentence meaning. Proc Natl Acad Sci U S A. 2016; 113(41):E6256-E6262. PMC: 5068329. DOI: 10.1073/pnas.1612132113.

3. Grand G, Blank I, Pereira F, Fedorenko E. Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nat Hum Behav. 2022; 6(7):975-987. PMC: 10349641. DOI: 10.1038/s41562-022-01316-8.

4. Saffran J, Aslin R, Newport E. Statistical learning by 8-month-old infants. Science. 1996; 274(5294):1926-8. DOI: 10.1126/science.274.5294.1926.

5. Yiu E, Kosoy E, Gopnik A. Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet). Perspect Psychol Sci. 2023; 19(5):874-883. PMC: 11373165. DOI: 10.1177/17456916231201401.