Hebb Learning of Features Based on Their Information Content
Abstract
This paper investigates the stationary points of a Hebb learning rule with a sigmoid nonlinearity. We show mathematically that when the input has a low information content, as measured by the input's variance, this learning rule suppresses learning, that is, forces the weight vector to converge to the zero vector. When the information content exceeds a certain value, the rule automatically begins to learn a feature in the input. Our analysis suggests that under certain conditions it is the first principal component that is learned. The weight vector's length remains bounded, provided the variance of the input is finite. Simulations confirm the derived theoretical results.
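The paper's exact update rule is not reproduced on this page. As a rough illustration of the kind of behavior the abstract describes, the sketch below uses a generic sigmoid (tanh) Hebb rule with an Oja-style decay term; this specific rule is an assumption for illustration, not the authors' formulation. It shows two of the abstract's qualitative claims: the weight vector stays bounded, and for input with sufficient variance it aligns with the first principal component.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-D inputs whose first principal component lies along axis 0:
# standard deviation 2.0 on axis 0, 0.5 on axis 1.
n = 5000
X = rng.normal(size=(n, 2)) * np.array([2.0, 0.5])

eta = 0.01                       # learning rate
w = rng.normal(size=2) * 0.1     # small random initial weights

for x in X:
    y = np.tanh(w @ x)           # sigmoid output (tanh assumed here)
    # Hebbian term y*x plus Oja-style decay y^2*w keeps |w| bounded
    w += eta * y * (x - y * w)

# cosine of the angle between w and the first principal component (axis 0)
cos_pc1 = abs(w[0]) / np.linalg.norm(w)
print("weight norm:", np.linalg.norm(w))
print("alignment with PC1:", cos_pc1)
```

With these settings the weight norm settles well below the input scale and the learned direction is dominated by the high-variance axis, consistent with the bounded-weight and principal-component claims in the abstract.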
Cite
Text
Peper and Noda. "Hebb Learning of Features Based on Their Information Content." Neural Information Processing Systems, 1996.

Markdown
[Peper and Noda. "Hebb Learning of Features Based on Their Information Content." Neural Information Processing Systems, 1996.](https://mlanthology.org/neurips/1996/peper1996neurips-hebb/)

BibTeX
@inproceedings{peper1996neurips-hebb,
  title = {{Hebb Learning of Features Based on Their Information Content}},
  author = {Peper, Ferdinand and Noda, Hideki},
  booktitle = {Neural Information Processing Systems},
  year = {1996},
  pages = {246--252},
  url = {https://mlanthology.org/neurips/1996/peper1996neurips-hebb/}
}