Evaluating (and Improving) the Correspondence Between Deep Neural Networks and Human Representations
Overview
Decades of psychological research have been aimed at modeling how people learn features and categories. The empirical validation of these theories is often based on artificial stimuli with simple representations. Recently, deep neural networks have reached or surpassed human accuracy on tasks such as identifying objects in natural images. These networks learn representations of real-world stimuli that can potentially be leveraged to capture psychological representations. We find that state-of-the-art object classification networks provide surprisingly accurate predictions of human similarity judgments for natural images, but they fail to capture some of the structure represented by people. We show that a simple transformation that corrects these discrepancies can be obtained through convex optimization. We use the resulting representations to predict the difficulty of learning novel categories of natural images. Our results extend the scope of psychological experiments and computational modeling by enabling tractable use of large natural stimulus sets.
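The "simple transformation" described above amounts to re-weighting the network's feature dimensions so that weighted inner products between stimulus representations match human similarity judgments, which can be cast as a regularized least-squares (convex) problem. The sketch below illustrates this idea; the function names, the ridge penalty `alpha`, and the use of scikit-learn are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch: learn per-dimension weights that align DNN feature
# similarities with a matrix of human similarity judgments.
import numpy as np
from sklearn.linear_model import Ridge

def fit_similarity_weights(features, human_sim, alpha=1.0):
    """features: (n_items, n_dims) DNN representations of the stimuli.
    human_sim: (n_items, n_items) human similarity judgments.
    Returns weights w so that sum_k w_k * f_ik * f_jk approximates the
    human similarity for each stimulus pair (i, j)."""
    n = features.shape[0]
    i_idx, j_idx = np.triu_indices(n, k=1)           # unique stimulus pairs
    X = features[i_idx] * features[j_idx]            # elementwise feature products
    y = human_sim[i_idx, j_idx]                      # corresponding human ratings
    model = Ridge(alpha=alpha, fit_intercept=False)  # L2-regularized least squares (convex)
    model.fit(X, y)
    return model.coef_

def predicted_similarity(features, weights):
    """Similarity matrix under the learned re-weighting of feature dimensions."""
    return (features * weights) @ features.T
```

Under these assumptions, the learned weights define transformed representations whose pairwise similarities can then be used downstream, for example to predict how hard a novel category of natural images will be to learn.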