Synaptic Weight Noise During MLP Learning Enhances Fault-Tolerance, Generalization and Learning Trajectory
Abstract
We analyse the effects of analog noise on the synaptic arithmetic during MultiLayer Perceptron training, by expanding the cost function to include noise-mediated penalty terms. Predictions are made in the light of these calculations which suggest that fault tolerance, generalisation ability and learning trajectory should be improved by such noise-injection. Extensive simulation experiments on two distinct classification problems substantiate the claims. The results appear to be perfectly general for all training schemes where weights are adjusted incrementally, and have wide-ranging implications for all applications, particularly those involving "inaccurate" analog neural VLSI.
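As a concrete illustration of the technique the abstract describes, the sketch below trains a small MLP while perturbing every synaptic weight with multiplicative Gaussian noise on each incremental update, then evaluates with the noise switched off. The network size, the specific noise model (multiplicative, with fractional standard deviation `noise_std`), the XOR task and the learning rate are illustrative assumptions for this sketch, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, x, noise_std=0.0):
    """Forward pass through a 2-layer MLP. If noise_std > 0, each weight
    is perturbed multiplicatively: w -> w * (1 + noise_std * N(0, 1))."""
    W1, b1, W2, b2 = weights
    if noise_std > 0.0:
        W1 = W1 * (1.0 + noise_std * rng.standard_normal(W1.shape))
        W2 = W2 * (1.0 + noise_std * rng.standard_normal(W2.shape))
    h = np.tanh(x @ W1 + b1)                    # hidden layer
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return h, y, (W1, W2)

def train_step(weights, x, t, lr=0.1, noise_std=0.2):
    """One incremental weight update; gradients are computed through the
    noisy forward pass but applied to the clean (stored) weights."""
    W1, b1, W2, b2 = weights
    h, y, (W1n, W2n) = forward(weights, x, noise_std)
    dy = (y - t) * y * (1.0 - y)        # squared-error loss, sigmoid output
    dW2 = h.T @ dy
    db2 = dy.sum(axis=0)
    dh = (dy @ W2n.T) * (1.0 - h**2)    # backprop through tanh
    dW1 = x.T @ dh
    db1 = dh.sum(axis=0)
    return (W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2)

# Toy usage: XOR, trained with weight noise, evaluated noise-free.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
w = (0.5 * rng.standard_normal((2, 4)), np.zeros(4),
     0.5 * rng.standard_normal((4, 1)), np.zeros(1))
for _ in range(5000):
    w = train_step(w, X, T, noise_std=0.2)
_, y_clean, _ = forward(w, X, noise_std=0.0)  # noise-free deployment
```

Because the gradient is evaluated at the noise-perturbed weights, the cost averaged over the noise acquires extra penalty terms of the kind the paper's expansion identifies, which is the mechanism claimed to improve fault tolerance and generalisation.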
Cite
Text
Murray and Edwards. "Synaptic Weight Noise During MLP Learning Enhances Fault-Tolerance, Generalization and Learning Trajectory." Neural Information Processing Systems, 1992.

Markdown

[Murray and Edwards. "Synaptic Weight Noise During MLP Learning Enhances Fault-Tolerance, Generalization and Learning Trajectory." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/murray1992neurips-synaptic/)

BibTeX
@inproceedings{murray1992neurips-synaptic,
title = {{Synaptic Weight Noise During MLP Learning Enhances Fault-Tolerance, Generalization and Learning Trajectory}},
author = {Murray, Alan F. and Edwards, Peter J.},
booktitle = {Neural Information Processing Systems},
year = {1992},
pages = {491--498},
url = {https://mlanthology.org/neurips/1992/murray1992neurips-synaptic/}
}