
Learning to Represent Visual Input

Overview
Specialty: Biology
Date: 2009 Dec 17
PMID: 20008395
Citations: 10
Abstract

One of the central problems in computational neuroscience is to understand how the object-recognition pathway of the cortex learns a deep hierarchy of nonlinear feature detectors. Recent progress in machine learning shows that it is possible to learn deep hierarchies without requiring any labelled data. The feature detectors are learned one layer at a time and the goal of the learning procedure is to form a good generative model of images, not to predict the class of each image. The learning procedure only requires the pairwise correlations between the activations of neuron-like processing units in adjacent layers. The original version of the learning procedure is derived from a quadratic 'energy' function but it can be extended to allow third-order, multiplicative interactions in which neurons gate the pairwise interactions between other neurons. A technique for factoring the third-order interactions leads to a learning module that again has a simple learning rule based on pairwise correlations. This module looks remarkably like modules that have been proposed by both biologists trying to explain the responses of neurons and engineers trying to create systems that can recognize objects.
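
As a concrete illustration of the layer-wise rule described above, the sketch below shows a binary restricted Boltzmann machine trained with one step of contrastive divergence, the kind of module from which such deep generative hierarchies are commonly built. The weight update uses only the pairwise correlations between visible and hidden activations, measured once on the data and once on a one-step reconstruction. This is a minimal illustrative sketch, not the paper's own implementation; the layer sizes, learning rate, and random binary "images" used here are assumptions chosen for the example.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    # One contrastive-divergence (CD-1) update for a binary RBM.
    # The weight change depends only on the pairwise statistics <v_i h_j>
    # gathered on the data and on a one-step reconstruction, i.e. the
    # "pairwise correlations between adjacent layers" rule in the abstract.

    # Up-pass: hidden activation probabilities and a binary sample
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)

    # Down-pass: reconstruct the visible layer, then re-infer the hiddens
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)

    # Positive (data) and negative (reconstruction) pairwise correlations
    pos = v0.T @ h0_prob
    neg = v1_prob.T @ h1_prob

    n = v0.shape[0]
    W += lr * (pos - neg) / n
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return ((v0 - v1_prob) ** 2).mean()  # reconstruction error, for monitoring

# Illustrative usage on random binary "images" (sizes are assumptions)
n_visible, n_hidden = 784, 256
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_vis = np.zeros(n_visible)
b_hid = np.zeros(n_hidden)
batch = (rng.random((32, n_visible)) < 0.5).astype(float)
err = cd1_step(batch, W, b_vis, b_hid)

Stacking such modules, with each trained layer's hidden activations serving as the data for the next, gives the layer-at-a-time hierarchy that the abstract describes.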

Citing Articles

Behavioral Studies Using Large-Scale Brain Networks - Methods and Validations.

Liu M, Amey R, Backer R, Simon J, Forbes C. Front Hum Neurosci. 2022; 16:875201.

PMID: 35782044 PMC: 9244405. DOI: 10.3389/fnhum.2022.875201.


Single-Input Multi-Output U-Net for Automated 2D Foetal Brain Segmentation of MR Images.

Rampun A, Jarvis D, Griffiths P, Zwiggelaar R, Scotney B, Armitage P. J Imaging. 2021; 7(10).

PMID: 34677286 PMC: 8536962. DOI: 10.3390/jimaging7100200.


Processing of auditory novelty across the cortical hierarchy: An intracranial electrophysiology study.

Nourski K, Steinschneider M, Rhone A, Kawasaki H, Howard 3rd M, Banks M. Neuroimage. 2018; 183:412-424.

PMID: 30114466 PMC: 6207077. DOI: 10.1016/j.neuroimage.2018.08.027.


The interplay of plasticity and adaptation in neural circuits: a generative model.

Bernacchia A. Front Synaptic Neurosci. 2014; 6:26.

PMID: 25400577 PMC: 4214225. DOI: 10.3389/fnsyn.2014.00026.


Pupil fluctuations track fast switching of cortical states during quiet wakefulness.

Reimer J, Froudarakis E, Cadwell C, Yatsenko D, Denfield G, Tolias A. Neuron. 2014; 84(2):355-62.

PMID: 25374359 PMC: 4323337. DOI: 10.1016/j.neuron.2014.09.033.


References
1.
Hinton G, Osindero S, Teh Y . A fast learning algorithm for deep belief nets. Neural Comput. 2006; 18(7):1527-54. DOI: 10.1162/neco.2006.18.7.1527. View

2.
Sutskever I, Hinton G . Deep, narrow sigmoid belief networks are universal approximators. Neural Comput. 2008; 20(11):2629-36. DOI: 10.1162/neco.2008.12-07-661. View

3.
Hinton G, Salakhutdinov R . Reducing the dimensionality of data with neural networks. Science. 2006; 313(5786):504-7. DOI: 10.1126/science.1127647. View

4.
Serre T, Wolf L, Bileschi S, Riesenhuber M, Poggio T . Robust object recognition with cortex-like mechanisms. IEEE Trans Pattern Anal Mach Intell. 2007; 29(3):411-26. DOI: 10.1109/TPAMI.2007.56. View

5.
Fukushima K . Neocognitron: a self organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern. 1980; 36(4):193-202. DOI: 10.1007/BF00344251. View