
A Mathematical Theory of Semantic Development in Deep Neural Networks

Overview
Specialty Science
Date 2019 May 19
PMID 31101713
Citations 44
Abstract

An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question: What are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? We address this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. We find exact solutions to this learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. Thus, surprisingly, our simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep-learning dynamics to give rise to these regularities.
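The stage-like learning the abstract describes can be illustrated with a minimal sketch (not the authors' code; the dataset and hyperparameters here are illustrative assumptions): a two-layer deep linear network trained by gradient descent on a toy hierarchical dataset. In the paper's analysis, each input-output singular mode of the data is acquired in a rapid transition whose timing scales inversely with its singular value, so broad distinctions (e.g. bird vs. fish) are learned before fine ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical items (hypothetical example): 4 items with
# features shared at different levels of a binary tree.
Y = np.array([
    [1, 1, 1, 1],    # "can grow"  (shared by all items)
    [1, 1, -1, -1],  # "has wings" (birds vs. fish)
    [1, -1, 0, 0],   # "can sing"  (splits the two birds)
    [0, 0, 1, -1],   # "is red"    (splits the two fish)
], dtype=float)
X = np.eye(4)  # one-hot item inputs

# Two-layer linear network y = W2 @ W1 @ x with small random
# initial weights, which produces clear stage-like transitions.
n_hidden = 16
scale = 0.01
W1 = scale * rng.standard_normal((n_hidden, 4))
W2 = scale * rng.standard_normal((4, n_hidden))

lr = 0.05
svals = []
for step in range(4000):
    pred = W2 @ W1 @ X
    err = pred - Y
    # Batch gradient descent on squared error
    gW2 = err @ (W1 @ X).T
    gW1 = W2.T @ err @ X.T
    W2 -= lr * gW2
    W1 -= lr * gW1
    # Track singular values of the learned input-output map
    svals.append(np.linalg.svd(W2 @ W1, compute_uv=False))

svals = np.array(svals)
# Stronger data modes (larger singular values) are learned first;
# weaker distinctions emerge later, each in a rapid transition.
print(svals[-1])                            # learned singular values
print(np.linalg.svd(Y, compute_uv=False))   # target singular values
```

Because the feature rows are mutually orthogonal, the target singular values are simply the row norms (2, 2, √2, √2), and the two coarse modes rise well before the two fine ones, mirroring the hierarchical differentiation described above.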

Citing Articles

Flexible task abstractions emerge in linear networks with fast and bounded units.

Sandbrink K, Bauer J, Proca A, Saxe A, Summerfield C, Hummos A. ArXiv. 2025.

PMID: 39876939 PMC: 11774440.


Universality of representation in biological and artificial neural networks.

Hosseini E, Casto C, Zaslavsky N, Conwell C, Richardson M, Fedorenko E. bioRxiv. 2025.

PMID: 39764030 PMC: 11703180. DOI: 10.1101/2024.12.26.629294.


Object Feature Memory Is Distorted by Category Structure.

Tandoc M, Dong C, Schapiro A. Open Mind (Camb). 2024; 8:1348-1368.

PMID: 39654820 PMC: 11627532. DOI: 10.1162/opmi_a_00170.


Abrupt and spontaneous strategy switches emerge in simple regularised neural networks.

Lowe A, Touzo L, Muhle-Karbe P, Saxe A, Summerfield C, Schuck N. PLoS Comput Biol. 2024; 20(10):e1012505.

PMID: 39432516 PMC: 11527165. DOI: 10.1371/journal.pcbi.1012505.


Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron.

Schmid C, Murray J. ArXiv. 2024.

PMID: 39279842 PMC: 11398553.

