History-Dependent Attractor Neural Networks
Abstract
We present a methodological framework enabling a detailed description of the performance of Hopfield-like attractor neural networks (ANN) in the first two iterations. Using the Bayesian approach, we find that performance is improved when a history-based term is included in the neuron's dynamics. A further enhancement of the network's performance is achieved by judiciously choosing the censored neurons (those which become active in a given iteration) on the basis of the magnitude of their post-synaptic potentials. The contribution of biologically plausible, censored, history-dependent dynamics is especially marked in conditions of low firing activity and sparse connectivity, two important characteristics of the mammalian cortex. In such networks, the performance attained is higher than the performance of two 'independent' iterations, which represents an upper bound on the performance of history-independent networks.
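The abstract describes two ingredients: a history term added to each neuron's field, and "censored" updates in which only the neurons with the largest-magnitude post-synaptic potentials change state in a given iteration. The sketch below is a minimal, generic illustration of these two ideas on a standard ±1 Hopfield network; the mixing parameter `mix`, the updated fraction `update_frac`, and the simple sign rule are assumptions for illustration, not the Bayesian-derived weights analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Standard Hopfield (Hebbian) weight matrix with zero diagonal."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def censored_history_step(w, state, prev_state, mix=0.7, update_frac=0.5):
    """One iteration: mix the current post-synaptic potential with a history
    term (the previous state), and update only the neurons whose potentials
    have the largest magnitude ('censored' dynamics). Parameter values are
    illustrative, not those derived in the paper."""
    h = w @ state                                  # post-synaptic potentials
    field = mix * h + (1.0 - mix) * prev_state     # history-dependent field
    k = int(update_frac * len(state))
    active = np.argsort(-np.abs(h))[:k]            # neurons with most reliable PSPs
    new_state = state.copy()
    new_state[active] = np.where(field[active] >= 0, 1.0, -1.0)
    return new_state

# Toy usage: recall a stored pattern from a noisy cue over two iterations,
# the regime analysed in the paper.
n, p = 200, 10
patterns = rng.choice([-1.0, 1.0], size=(p, n))
w = hebbian_weights(patterns)
cue = patterns[0] * rng.choice([1.0, -1.0], size=n, p=[0.85, 0.15])
state, prev = cue.copy(), cue.copy()
for _ in range(2):
    state, prev = censored_history_step(w, state, prev), state
print("overlap with stored pattern:", float(state @ patterns[0]) / n)
```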
Cite
Text
Meilijson and Ruppin. "History-Dependent Attractor Neural Networks." Neural Information Processing Systems, 1992.
Markdown
[Meilijson and Ruppin. "History-Dependent Attractor Neural Networks." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/meilijson1992neurips-historydependent/)
BibTeX
@inproceedings{meilijson1992neurips-historydependent,
title = {{History-Dependent Attractor Neural Networks}},
author = {Meilijson, Isaac and Ruppin, Eytan},
booktitle = {Neural Information Processing Systems},
year = {1992},
pages = {572--579},
url = {https://mlanthology.org/neurips/1992/meilijson1992neurips-historydependent/}
}