Random Feature Hopfield Networks Generalize Retrieval to Previously Unseen Examples

Abstract

It has recently been shown that, when a Hopfield network stores examples generated as superpositions of random features, new attractors corresponding to those features appear in the model. In this work we extend that result to superpositions of a finite number of features, and we show numerically that the network remains capable of learning the features. Furthermore, we reveal that the network also develops attractors corresponding to previously unseen examples generated with the same set of features. We support this result with a simple signal-to-noise argument and conjecture a phase diagram.
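The setup described in the abstract can be illustrated with a minimal sketch: binary examples are built as the sign of a superposition of a few random binary features, stored in a standard Hopfield network with the Hebbian rule, and retrieved by zero-temperature dynamics. All sizes (`N`, `D`, `s`, `P`) below are illustrative choices, not the paper's parameters, and the code only demonstrates retrieval of a stored example, not the feature or unseen-example attractors the paper studies.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500  # number of neurons (illustrative)
D = 50   # number of random features (illustrative)
s = 3    # features superposed per example; odd, so the sum is never zero
P = 10   # number of stored examples

# Random binary features in {-1, +1}^N.
features = rng.choice([-1.0, 1.0], size=(D, N))

# Each example is the sign of a superposition of s randomly chosen features.
examples = np.stack([
    np.sign(features[rng.choice(D, size=s, replace=False)].sum(axis=0))
    for _ in range(P)
])

# Hebbian coupling matrix with zero diagonal.
J = examples.T @ examples / N
np.fill_diagonal(J, 0.0)

def retrieve(state, steps=50):
    """Synchronous zero-temperature dynamics until a fixed point (or steps runs out)."""
    for _ in range(steps):
        new = np.sign(J @ state)
        new[new == 0] = 1.0  # break rare ties deterministically
        if np.array_equal(new, state):
            break
        state = new
    return state

# Start from a stored example with 10% of the bits flipped.
x = examples[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
x[flip] *= -1

fixed = retrieve(x)
overlap = fixed @ examples[0] / N
print(f"overlap with stored example: {overlap:.2f}")
```

Because the examples share features, they are correlated, so the final overlap is typically high but need not be exactly 1; the paper's point is that, in this regime, the features themselves (and unseen superpositions of them) also become attractors.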

Cite

Text

Negri et al. "Random Feature Hopfield Networks Generalize Retrieval to Previously Unseen Examples." NeurIPS 2023 Workshops: AMHN, 2023.

Markdown

[Negri et al. "Random Feature Hopfield Networks Generalize Retrieval to Previously Unseen Examples." NeurIPS 2023 Workshops: AMHN, 2023.](https://mlanthology.org/neuripsw/2023/negri2023neuripsw-random/)

BibTeX

@inproceedings{negri2023neuripsw-random,
  title     = {{Random Feature Hopfield Networks Generalize Retrieval to Previously Unseen Examples}},
  author    = {Negri, Matteo and Lauditi, Clarissa and Perugini, Gabriele and Lucibello, Carlo and Malatesta, Enrico Maria},
  booktitle = {NeurIPS 2023 Workshops: AMHN},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/negri2023neuripsw-random/}
}