Mouse Visual Cortex As a Limited Resource System That Self-learns an Ecologically-general Representation
Overview
Studies of the mouse visual system have revealed a variety of visual brain areas thought to support a multitude of behavioral capacities, ranging from stimulus-reward associations to goal-directed navigation and object-centric discriminations. However, an overall understanding of mouse visual cortex, and of how it supports this range of behaviors, is still lacking. Here, we take a computational approach to address these questions, providing a high-fidelity quantitative model of mouse visual cortex and identifying key structural and functional principles underlying that model's success. Structurally, we find that a comparatively shallow network with low-resolution input is optimal for modeling mouse visual cortex. Our main finding is functional: models trained with task-agnostic, self-supervised objective functions based on contrastive embeddings match mouse cortex much better than models trained with supervised objectives or alternative self-supervised methods. This result contrasts with primates, where prior work showed the two approaches to be roughly equivalent, naturally raising the question of why self-supervised objectives outperform supervised ones in mouse. To this end, we show that the self-supervised, contrastive objective builds a general-purpose visual representation that enables the system to transfer better to out-of-distribution visual scene understanding and reward-based navigation tasks. Our results suggest that mouse visual cortex is a low-resolution, shallow network that makes the best use of the mouse's limited resources to create a lightweight, general-purpose visual system, in contrast to the deep, high-resolution, and more categorization-dominated visual system of primates.
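The contrastive objectives referenced above are typically variants of an InfoNCE-style loss, which pulls embeddings of two augmented views of the same image together while pushing apart embeddings of different images in the batch. As a minimal, hypothetical NumPy sketch (the paper does not specify this exact formulation; function and variable names are illustrative):

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE contrastive loss over a batch of paired embeddings.

    z_a, z_b: (batch, dim) embeddings of two augmented views of the same
    images. Matching rows are positive pairs; all other cross-pairs in
    the batch act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs lie on the diagonal (view i of image i vs. view i').
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
# Nearly identical views -> strong positives -> low loss.
loss_aligned = info_nce_loss(z, z + 0.05 * rng.normal(size=(8, 32)))
# Unrelated views -> positives indistinguishable from negatives -> high loss.
loss_random = info_nce_loss(z, rng.normal(size=(8, 32)))
print(loss_aligned, loss_random)
```

The key property is that no labels appear anywhere: the training signal comes entirely from the pairing structure of the data, which is what makes such objectives task-agnostic.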