
Neural Networks with Optimized Single-neuron Adaptation Uncover Biologically Plausible Regularization

Overview
Specialty Biology
Date 2024 Dec 13
PMID 39671421
Abstract

Neurons in the brain have rich and adaptive input-output properties. Features such as heterogeneous f-I curves and spike frequency adaptation are known to place single neurons in optimal coding regimes when facing changing stimuli. Yet, it is still unclear how brain circuits exploit single-neuron flexibility, and how network-level requirements may have shaped such cellular function. To answer this question, a multi-scale approach is needed in which the computations of single neurons and neural circuits are considered as a complete system. In this work, we use artificial neural networks to systematically investigate single-neuron input-output adaptive mechanisms, optimized in an end-to-end fashion. Throughout the optimization process, each neuron is free to modify its nonlinear activation function, parametrized to mimic the f-I curves of biological neurons, either by learning an individual static function or via a learned, shared adaptation mechanism that modifies activation functions in real time during a task. We find that such adaptive networks show much-improved robustness to noise and to changes in input statistics. Using tools from dynamical systems theory, we analyze the role of these emergent single-neuron properties and argue that neural diversity and adaptation play an active regularization role, enabling neural circuits to optimally propagate information across time. Finally, we outline similarities between these optimized solutions and known coding strategies found in biological neurons, such as gain scaling and fractional-order differentiation/integration.
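The setup described above, a recurrent network whose units each carry a parametrized f-I-like activation plus an online adaptation variable, can be sketched in a few lines. This is only an illustrative sketch: the paper's exact parametrization and adaptation rule are not given in the abstract, so the softplus-style f-I curve, the adaptation dynamics, and all names (`fI`, `AdaptiveRNN`, `tau_a`, `beta`) are assumptions for the sake of the example.

```python
import numpy as np

def fI(x, gain, slope, thresh):
    # Hypothetical per-neuron f-I curve: a softplus with learnable
    # gain, slope, and threshold, mimicking heterogeneous biological f-I curves.
    return gain * np.log1p(np.exp(slope * (x - thresh)))

class AdaptiveRNN:
    """Toy recurrent network with per-neuron activation parameters and
    a slow adaptation variable that raises the effective threshold
    after sustained activity (spike-frequency-adaptation style)."""

    def __init__(self, n, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
        # Per-neuron activation parameters (would be optimized end-to-end).
        self.gain = np.ones(n)
        self.slope = np.ones(n)
        self.thresh = np.zeros(n)
        self.h = np.zeros(n)   # firing rates
        self.a = np.zeros(n)   # adaptation variable
        self.tau_a = 20.0      # adaptation time constant (in steps, assumed)
        self.beta = 0.5        # adaptation strength (assumed)

    def step(self, x):
        drive = self.W @ self.h + x
        # Adaptation shifts each neuron's effective threshold in real time.
        self.h = fI(drive, self.gain, self.slope, self.thresh + self.beta * self.a)
        # Slow low-pass filter of activity drives the adaptation variable.
        self.a += (self.h - self.a) / self.tau_a
        return self.h
```

With recurrence switched off (`W = 0`) and a constant input, a unit's rate decays from its initial response toward a lower adapted level, the qualitative signature of spike frequency adaptation that the abstract links to improved robustness.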
