Learning Continuous Attractors in Recurrent Networks
Abstract
One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn. From a statistical viewpoint, the pattern completion task allows a formulation of unsupervised learning in terms of regression rather than density estimation.
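The abstract's core idea can be illustrated with a toy experiment. Below is a minimal sketch, not the paper's exact model or architecture: patterns are Gaussian bumps at continuously varying positions on a ring (a one-dimensional manifold), and a recurrent weight matrix is trained by gradient descent to restore occluded patterns, i.e., pattern completion posed as regression. All names, hyperparameters, and the single-step training scheme here are illustrative assumptions.

```python
# Sketch: training a recurrent net on pattern completion so that its
# relaxation dynamics come to model a one-dimensional pattern manifold.
# Illustrative only; not the architecture from Seung (1997).
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of units

def bump(center):
    """A smooth bump on a ring; its position parametrizes the manifold."""
    theta = 2 * np.pi * np.arange(N) / N
    return np.exp((np.cos(theta - center) - 1.0) / 0.1)

W = 0.01 * rng.standard_normal((N, N))
lr = 0.05

for step in range(2000):
    target = bump(rng.uniform(0, 2 * np.pi))
    # Pattern completion task: occlude roughly half the units, run one
    # relaxation step, and penalize the distance to the full pattern.
    mask = rng.random(N) < 0.5
    x = target * mask
    y = np.tanh(W @ x)
    err = y - target
    # Gradient of 0.5 * ||tanh(W x) - target||^2 with respect to W.
    W -= lr * np.outer(err * (1 - y**2), x)

# After training, iterating the dynamics from an occluded pattern should
# relax toward the nearby point on the learned bump manifold -- the
# continuous attractor that fills in the missing information.
x = bump(1.0) * (rng.random(N) < 0.5)
for _ in range(20):
    x = np.tanh(W @ x)
print("completion error:", np.linalg.norm(x - bump(1.0)))
```

Because the bump position varies continuously, the trained fixed points form a ring of states rather than a discrete set of memories, which is the distinction the abstract draws between point attractors and a continuous attractor.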
Cite
Text
Seung. "Learning Continuous Attractors in Recurrent Networks." Neural Information Processing Systems, 1997.Markdown
[Seung. "Learning Continuous Attractors in Recurrent Networks." Neural Information Processing Systems, 1997.](https://mlanthology.org/neurips/1997/seung1997neurips-learning/)BibTeX
@inproceedings{seung1997neurips-learning,
title = {{Learning Continuous Attractors in Recurrent Networks}},
author = {Seung, H. Sebastian},
booktitle = {Neural Information Processing Systems},
year = {1997},
pages = {654-660},
url = {https://mlanthology.org/neurips/1997/seung1997neurips-learning/}
}