
THINGSvision: A Python Toolbox for Streamlining the Extraction of Activations From Deep Neural Networks

Overview
Specialty Neurology
Date 2021 Oct 11
PMID 34630062
Citations 12
Abstract

Over the past decade, deep neural network (DNN) models have received a lot of attention due to their near-human object classification performance and their excellent prediction of signals recorded from biological visual systems. To better understand the function of these networks and relate them to hypotheses about brain activity and behavior, researchers need to extract the activations to images across different DNN layers. The abundance of different DNN variants, however, can often be unwieldy, and the task of extracting DNN activations from different layers may be non-trivial and error-prone for someone without a strong computational background. Thus, researchers in the fields of cognitive science and computational neuroscience would benefit from a library or package that supports a user in the extraction task. THINGSvision is a new Python module that aims to close this gap by providing a simple and unified tool for extracting layer activations for a wide range of pretrained and randomly-initialized neural network architectures, even for users with little to no programming experience. We demonstrate the general utility of THINGSvision by relating extracted DNN activations to a number of functional MRI and behavioral datasets using representational similarity analysis, which can be performed as an integral part of the toolbox. Together, THINGSvision enables researchers across diverse fields to extract features in a streamlined manner for their custom image dataset, thereby improving the ease of relating DNNs, brain activity, and behavior, and improving the reproducibility of findings in these research fields.
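The representational similarity analysis (RSA) mentioned in the abstract compares two systems (e.g., a DNN layer and a brain region) by correlating their representational dissimilarity matrices (RDMs) rather than their raw activations. The sketch below illustrates that core computation in plain NumPy; the function names, feature shapes, and synthetic data are illustrative assumptions, not THINGSvision's actual API.

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between the feature vectors of every pair of stimuli.
    `features` has shape (n_stimuli, n_units)."""
    return 1.0 - np.corrcoef(features)

def _rank(x):
    """Ordinal ranks of a 1-D array (ties broken by order, which is
    fine for continuous synthetic data)."""
    order = np.argsort(x)
    ranks = np.empty(len(x), dtype=float)
    ranks[order] = np.arange(len(x))
    return ranks

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs,
    the standard RDM comparison in RSA."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ra, rb = _rank(rdm_a[iu]), _rank(rdm_b[iu])
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Synthetic stand-ins: 10 stimuli, DNN layer with 512 units, and a
# "brain" pattern that partially reflects the same representation.
rng = np.random.default_rng(0)
dnn_features = rng.normal(size=(10, 512))
brain_patterns = dnn_features[:, :100] + rng.normal(scale=0.5, size=(10, 100))

similarity = compare_rdms(rdm(dnn_features), rdm(brain_patterns))
print(f"RDM similarity (Spearman): {similarity:.2f}")
```

Because the synthetic "brain" data is built from a noisy subset of the DNN features, the two RDMs correlate positively; with unrelated representations the value would hover near zero.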

Citing Articles

The Scope and Limits of Fine-Grained Image and Category Information in the Ventral Visual Pathway.

Badwal M, Bergmann J, Roth J, Doeller C, Hebart M J Neurosci. 2024; 45(3).

PMID: 39505406 PMC: 11735656. DOI: 10.1523/JNEUROSCI.0936-24.2024.


Maintenance and transformation of representational formats during working memory prioritization.

Pacheco-Estefan D, Fellner M, Kunz L, Zhang H, Reinacher P, Roy C Nat Commun. 2024; 15(1):8234.

PMID: 39300141 PMC: 11412997. DOI: 10.1038/s41467-024-52541-w.


Distributed representations of behaviour-derived object dimensions in the human visual system.

Contier O, Baker C, Hebart M Nat Hum Behav. 2024; 8(11):2179-2193.

PMID: 39251723 PMC: 11576512. DOI: 10.1038/s41562-024-01980-y.


Fine-grained knowledge about manipulable objects is well-predicted by contrastive language image pre-training.

Walbrin J, Sossounov N, Mahdiani M, Vaz I, Almeida J iScience. 2024; 27(7):110297.

PMID: 39040066 PMC: 11261149. DOI: 10.1016/j.isci.2024.110297.


Graspable foods and tools elicit similar responses in visual cortex.

Ritchie J, Andrews S, Vaziri-Pashkam M, Baker C bioRxiv. 2024.

PMID: 38529495 PMC: 10962699. DOI: 10.1101/2024.02.20.581258.

