Optimal Signalling in Attractor Neural Networks
Abstract
In [Meilijson and Ruppin, 1993] we presented a methodological framework describing the two-iteration performance of Hopfield-like attractor neural networks with history-dependent, Bayesian dynamics. We now extend this analysis in a number of directions: input patterns applied to small subsets of neurons, general connectivity architectures, and more efficient use of history. We show that the optimal signal (activation) function has a slanted sigmoidal shape, and provide an intuitive account of activation functions with a non-monotone shape. This function endows the model with some properties characteristic of cortical neurons' firing.
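As a rough illustration of the "slanted sigmoidal" shape, the sketch below implements one plausible parameterization: a saturating tanh term plus a small linear slant, so the signal keeps growing slowly with the input field rather than saturating completely. The functional form and the parameters `beta` and `lam` are assumptions for illustration only; the paper derives the exact optimal signal function from its Bayesian analysis.

```python
import numpy as np

def slanted_sigmoid(h, beta=2.0, lam=0.2):
    """Illustrative 'slanted sigmoid' signal function: a saturating
    sigmoid plus a linear term in the input field h.
    beta and lam are hypothetical parameters, not the paper's values."""
    return np.tanh(beta * h) + lam * h

# Compare against a plain sigmoid over a range of input fields:
# the slanted version keeps a nonzero slope at large |h|.
h = np.linspace(-3, 3, 7)
print(np.round(slanted_sigmoid(h), 3))
print(np.round(np.tanh(2.0 * h), 3))
```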
Cite
@inproceedings{meilijson1993neurips-optimal,
title = {{Optimal Signalling in Attractor Neural Networks}},
author = {Meilijson, Isaac and Ruppin, Eytan},
booktitle = {Neural Information Processing Systems},
year = {1993},
pages = {485-492},
url = {https://mlanthology.org/neurips/1993/meilijson1993neurips-optimal/}
}