Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-Winner Modern Hopfield Network
Abstract
Many autoassociative memory models rely on a localist framework, using a neuron or slot for each memory. However, neuroscience research suggests that memories depend on sparse, distributed representations over neurons with sparse connectivity. Accordingly, we extend a canonical localist memory model---the modern Hopfield network (MHN)---to a distributed variant called the K-winner modern Hopfield network, equating the number of synaptic parameters (weights) in the localist and K-winner variants. We study both models' abilities to reconstruct once-presented patterns organized into long presentation sequences, updating the parameters of the best-matching memory neuron (or k best neurons) as each new pattern is presented. We find that K-winner MHNs exhibit superior retention of older memories.
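For intuition, here is a minimal sketch of the kind of K-winner sequential update the abstract describes. The function names, the dot-product match score, and the convex learning rule are illustrative assumptions for this sketch, not the paper's exact formulation; setting k=1 recovers a localist-style update.

import numpy as np

def kwinner_update(W, x, k, lr=0.5):
    """Move the k best-matching memory neurons toward pattern x.
    W: (num_neurons, dim) weight matrix; x: (dim,) pattern.
    Match score and learning rule are illustrative assumptions."""
    scores = W @ x                       # match score per memory neuron
    winners = np.argsort(scores)[-k:]    # indices of the k best matches
    W[winners] += lr * (x - W[winners])  # update only the winners
    return W

def kwinner_retrieve(W, cue, k):
    """Reconstruct a pattern from a (possibly degraded) cue by
    averaging the weights of the k best-matching memory neurons."""
    scores = W @ cue
    winners = np.argsort(scores)[-k:]
    return W[winners].mean(axis=0)

# Usage: a long sequence of once-presented patterns, learned one at a time
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128)) * 0.01            # 64 memory neurons, 128-dim patterns
patterns = rng.choice([-1.0, 1.0], size=(200, 128))
for x in patterns:
    W = kwinner_update(W, x, k=4)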
Cite
Text
Bhandarkar and McClelland. "Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-Winner Modern Hopfield Network." NeurIPS 2023 Workshops: AMHN, 2023.
Markdown
[Bhandarkar and McClelland. "Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-Winner Modern Hopfield Network." NeurIPS 2023 Workshops: AMHN, 2023.](https://mlanthology.org/neuripsw/2023/bhandarkar2023neuripsw-sequential/)
BibTeX
@inproceedings{bhandarkar2023neuripsw-sequential,
title = {{Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-Winner Modern Hopfield Network}},
author = {Bhandarkar, Shaunak and McClelland, James Lloyd},
booktitle = {NeurIPS 2023 Workshops: AMHN},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/bhandarkar2023neuripsw-sequential/}
}