Reducing the Computational Footprint for Real-time BCPNN Learning

Overview

Journal: Front Neurosci
Date: 2015 Feb 7
PMID: 25657618
Citations: 3
Abstract

The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule, the pre-, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved, first, by rewriting the model so that the number of basic arithmetic operations per update is halved, and second, by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow real-time simulation of a reduced cortex model based on BCPNN on high-performance computing systems. More importantly, with the analytic solution at hand and thanks to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
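To make the computational saving concrete, the sketch below contrasts fixed-step Euler integration of a single exponentially decaying trace with the event-driven analytic update, where the decay factor for a quantized inter-spike interval is read from a precomputed look-up table; a minimal fixed-point variant follows. This is an illustrative sketch under assumed parameters, not the authors' implementation: the time constant TAU, the table resolution, the 16-bit fixed-point format, and all names are assumptions.

    import numpy as np

    TAU = 0.05        # trace time constant in seconds (assumed value)
    DT_TABLE = 1e-4   # look-up table resolution in seconds (assumed)
    N_TABLE = 20000   # table spans inter-event intervals up to 2 s

    # Precompute exp(-dt / TAU) for quantized inter-event intervals.
    DECAY_TABLE = np.exp(-np.arange(N_TABLE) * DT_TABLE / TAU)

    def euler_trace(z0, dt, h=1e-4):
        """Fixed-step explicit Euler for dz/dt = -z / TAU: many small updates."""
        z = z0
        for _ in range(int(round(dt / h))):
            z += h * (-z / TAU)
        return z

    def event_driven_trace(z0, dt):
        """Analytic event-driven update: one table read and one multiply."""
        idx = min(int(round(dt / DT_TABLE)), N_TABLE - 1)
        return z0 * DECAY_TABLE[idx]

    # Fixed-point variant (assumed signed 16-bit, 15 fractional bits):
    FRAC = 15

    def to_fx(x):
        return int(round(x * (1 << FRAC)))

    def fx_mul(a, b):
        return (a * b) >> FRAC

    # Decay of a unit trace over a 30 ms inter-spike interval:
    print(euler_trace(1.0, 0.03))         # ~0.5485, 300 arithmetic updates
    print(event_driven_trace(1.0, 0.03))  # ~0.5488, one update
    z_fx = fx_mul(to_fx(0.9999), to_fx(DECAY_TABLE[300]))
    print(z_fx / (1 << FRAC))             # ~0.5487 in fixed point

Since the full rule maintains eight such state variables per synapse, collapsing the many Euler steps between spike events into a single table read and multiply per variable is what drives the reported speedup, and the fixed-point variant indicates how the per-synapse memory footprint could be shrunk further.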

Citing Articles

Mapping the BCPNN Learning Rule to a Memristor Model.

Wang D, Xu J, Stathis D, Zhang L, Li F, Lansner A. Front Neurosci. 2021; 15:750458.

PMID: 34955716; PMC: 8695980; DOI: 10.3389/fnins.2021.750458.


Optimizing BCPNN Learning Rule for Memory Access.

Yang Y, Stathis D, Jordao R, Hemani A, Lansner A. Front Neurosci. 2020; 14:878.

PMID: 32982673; PMC: 7487417; DOI: 10.3389/fnins.2020.00878.


Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware.

Knight J, Tully P, Kaplan B, Lansner A, Furber S. Front Neuroanat. 2016; 10:37.

PMID: 27092061; PMC: 4823276; DOI: 10.3389/fnana.2016.00037.
