Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes
Abstract
In a multi-layered neural network, any one of the hidden layers can be viewed as computing a distributed representation of the input. Several "encoder" experiments have shown that when the representation space is small it can be fully used. But computing with such a representation requires completely dependable nodes. In the case where the hidden nodes are noisy and unreliable, we find that error-correcting schemes emerge simply by using noisy units during training; random errors injected during backpropagation result in spreading representations apart. Average and minimum distances increase with misfire probability, as predicted by coding-theoretic considerations. Furthermore, the effect of this noise is to protect the machine against permanent node failure, thereby potentially extending the useful lifetime of the machine.
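The abstract's core mechanism, hidden units that randomly "misfire" during training, and its coding-theoretic measure, distance between hidden representations, can be sketched as follows. This is a minimal illustration, not the paper's exact experimental setup: the network shape, the sigmoid activations, the choice of replacing a misfiring unit's activation with a random binary value, and the 0.5 binarization threshold are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, W2, p_misfire=0.0, rng=rng):
    """Forward pass with unreliable hidden nodes.

    Each hidden unit independently misfires with probability p_misfire,
    in which case its activation is replaced by a random binary value,
    modeling an undependable node during training.
    """
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))          # sigmoid hidden layer
    if p_misfire > 0.0:
        misfire = rng.random(h.shape) < p_misfire
        h = np.where(misfire,
                     rng.integers(0, 2, h.shape).astype(float), h)
    y = 1.0 / (1.0 + np.exp(-(h @ W2)))          # sigmoid output layer
    return h, y

def min_hamming(codes, threshold=0.5):
    """Minimum pairwise Hamming distance among binarized hidden codes.

    The paper's prediction is that this distance grows with the
    misfire probability used during training.
    """
    bits = (codes > threshold).astype(int)
    n = len(bits)
    return min(int(np.sum(bits[i] != bits[j]))
               for i in range(n) for j in range(i + 1, n))
```

In an encoder task one would train such a network on one-hot inputs, then compare `min_hamming` of the hidden codes learned with `p_misfire = 0` against those learned with larger misfire rates; larger minimum distance means more hidden-node errors can be tolerated at read-out, which is exactly the error-correcting-code behavior the abstract reports.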
Cite
Text
Judd and Munro. "Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes." Neural Information Processing Systems, 1992.
Markdown
[Judd and Munro. "Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/judd1992neurips-nets/)
BibTeX
@inproceedings{judd1992neurips-nets,
title = {{Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes}},
author = {Judd, Stephen and Munro, Paul W.},
booktitle = {Neural Information Processing Systems},
year = {1992},
pages = {89-96},
url = {https://mlanthology.org/neurips/1992/judd1992neurips-nets/}
}