Associative Memory in Realistic Neuronal Networks
Abstract
Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can coexist robustly with a stable background state. Mean-field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented.
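To make the notion of attractors as memories concrete, here is a minimal sketch of the kind of binary Hopfield network referenced in [1]: patterns are stored via a Hebbian weight matrix, and a corrupted probe is driven back to the nearest stored pattern. Note this toy model omits what the paper is actually about — analog neurons and a stable low-rate background state — and all names and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5  # neurons, stored patterns (illustrative sizes)

# random +/-1 patterns to store
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix with zero self-coupling
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    """Iterate the sign dynamics until (typically) a fixed point."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# corrupt pattern 0 by flipping 10% of its units, then recall
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1

out = recall(probe)
overlap = (out @ patterns[0]) / N  # 1.0 means perfect recall
```

At this low memory load (P much smaller than N), the corrupted probe falls into the basin of attraction of the stored pattern, so the overlap returns close to 1. The paper's point is that such attractors are hard to embed once the network must also maintain a realistic low-firing-rate background.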
Cite
Latham, Peter E. "Associative Memory in Realistic Neuronal Networks." Neural Information Processing Systems, 2001.
@inproceedings{latham2001neurips-associative,
title = {{Associative Memory in Realistic Neuronal Networks}},
author = {Latham, Peter E.},
booktitle = {Neural Information Processing Systems},
year = {2001},
pages = {237-244},
url = {https://mlanthology.org/neurips/2001/latham2001neurips-associative/}
}