A Generative Model for Attractor Dynamics

Abstract

Attractor networks, which map an input space to a discrete output space, are useful for pattern completion. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have similar dynamics to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters.
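The localist dynamics the abstract alludes to can be illustrated with a small sketch. This is not the paper's exact formulation; it assumes a common mixture-of-Gaussians-style update in which each stored attractor takes soft responsibility for the current state, the state moves to the responsibility-weighted average of the attractors, and the Gaussian width is annealed so the state settles onto a single attractor. All function names and the annealing schedule below are illustrative.

```python
import math

def localist_step(y, attractors, sigma):
    """One update of a localist attractor sketch.

    Each attractor's responsibility is a softmax over negative squared
    distances (i.e., equal-width Gaussian likelihoods); the new state is
    the responsibility-weighted average of the attractor points.
    """
    dists = [sum((yi - ai) ** 2 for yi, ai in zip(y, a)) for a in attractors]
    logits = [-d / (2.0 * sigma ** 2) for d in dists]
    m = max(logits)  # subtract max for numerical stability
    weights = [math.exp(l - m) for l in logits]
    z = sum(weights)
    q = [w / z for w in weights]  # responsibilities, sum to 1
    return [sum(qi * a[k] for qi, a in zip(q, attractors))
            for k in range(len(y))]

def settle(y, attractors, sigma=1.0, anneal=0.9, steps=50):
    """Iterate the update while shrinking sigma (annealing).

    As sigma decreases, the softmax sharpens and the state converges
    to the nearest attractor's basin.
    """
    for _ in range(steps):
        y = localist_step(y, attractors, sigma)
        sigma = max(sigma * anneal, 1e-3)
    return y
```

For example, with attractors at `[0, 0]` and `[1, 1]`, a state starting near the origin settles onto `[0, 0]`, while a state starting near `[1, 1]` settles there; because the knowledge is local (one unit per attractor), there are no spurious fixed points between them.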

Cite

Text

Zemel and Mozer. "A Generative Model for Attractor Dynamics." Neural Information Processing Systems, 1999.

Markdown

[Zemel and Mozer. "A Generative Model for Attractor Dynamics." Neural Information Processing Systems, 1999.](https://mlanthology.org/neurips/1999/zemel1999neurips-generative/)

BibTeX

@inproceedings{zemel1999neurips-generative,
  title     = {{A Generative Model for Attractor Dynamics}},
  author    = {Zemel, Richard S. and Mozer, Michael},
  booktitle = {Neural Information Processing Systems},
  year      = {1999},
  pages     = {80--88},
  url       = {https://mlanthology.org/neurips/1999/zemel1999neurips-generative/}
}