Using Additive Noise in Back-propagation Training
The possibility of improving the generalization capability of a neural network by introducing additive noise to the training samples is discussed. The network considered is a feedforward layered neural network trained with the back-propagation algorithm. Back-propagation training is viewed as nonlinear least-squares regression and the additive noise is interpreted as generating a kernel estimate of the probability density that describes the training vector distribution. Two specific application types are considered: pattern classifier networks and estimation of a nonstochastic mapping from data corrupted by measurement errors. It is not proved that the introduction of additive noise to the training vectors always improves network generalization. However, the analysis suggests mathematically justified rules for choosing the characteristics of noise if additive noise is used in training. Results of mathematical statistics are used to establish various asymptotic consistency results for the proposed method. Numerical simulations support the applicability of the training method.
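The kernel interpretation mentioned above can be stated concretely. For a training sample x_1, ..., x_n and a kernel K with bandwidth h, the kernel density estimate is

\hat f_h(x) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{h^d}\, K\!\left(\frac{x - x_i}{h}\right),

and drawing \tilde x = x_I + h\varepsilon, with I a uniformly random index and \varepsilon \sim K, is exactly a sample from \hat f_h. Training on noise-perturbed inputs is therefore least-squares regression against this smoothed distribution. Below is a minimal sketch of the scheme, assuming a one-hidden-layer tanh network fitted to a nonstochastic mapping corrupted by measurement noise; the architecture, bandwidth sigma, and learning rate are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy observations of a nonstochastic mapping y = sin(x).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# One-hidden-layer network trained by gradient descent on squared error.
W1 = 0.5 * rng.standard_normal((1, 16))
b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1))
b2 = np.zeros(1)

sigma = 0.2  # std of the additive input noise = Gaussian-kernel bandwidth (assumed)
lr = 0.05
n = len(X)

for epoch in range(2000):
    # Additive noise: each pass sees a fresh perturbation of the inputs,
    # i.e. a draw from the Gaussian-kernel density estimate centred on X.
    Xn = X + sigma * rng.standard_normal(X.shape)

    # Forward pass.
    h = np.tanh(Xn @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # Backward pass for the least-squares loss.
    gW2 = h.T @ err / n
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = Xn.T @ dh / n
    gb1 = dh.mean(axis=0)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Evaluate on the clean (unperturbed) inputs.
h = np.tanh(X @ W1 + b1)
mse = np.mean((h @ W2 + b2 - y) ** 2)
print(f"training MSE on clean inputs: {mse:.4f}")

As the abstract notes, the bandwidth sigma is the key design choice: it controls how strongly the empirical input distribution is smoothed, and the asymptotic consistency results concern its behaviour as the sample size grows.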