Convergence Properties of Learning in ART1
Abstract
We consider the ART1 neural network architecture. It is shown that in the fast learning case, an ART1 network that is repeatedly presented with an arbitrary list of binary input patterns self-stabilizes the recognition code of every size-l pattern in at most l list presentations.
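The theorem concerns ART1 under fast learning, where a winning category's template is immediately updated to the intersection of the template and the input. A minimal sketch of that update loop (our own illustrative implementation, not the paper's code; the parameter names `rho` and `beta` and the set-based representation are assumptions) might look like:

```python
def art1_fast_learn(patterns, rho=0.5, beta=1.0, max_epochs=10):
    """Fast-learning ART1 sketch: patterns are sets of active feature indices.

    Returns the learned templates and the number of list presentations
    (epochs) until the templates stopped changing.
    """
    templates = []  # binary templates, stored as sets of active indices
    epoch = 0
    changed = True
    while changed and epoch < max_epochs:
        changed = False
        epoch += 1
        for p in patterns:
            I = set(p)
            # Rank committed categories by the choice function
            # T_j = |I AND w_j| / (beta + |w_j|), largest first.
            order = sorted(
                range(len(templates)),
                key=lambda j: len(I & templates[j]) / (beta + len(templates[j])),
                reverse=True,
            )
            winner = None
            for j in order:
                # Vigilance test: |I AND w_j| / |I| >= rho, else reset.
                if len(I & templates[j]) / len(I) >= rho:
                    winner = j
                    break
            if winner is None:
                # No committed category passes vigilance:
                # an uncommitted node learns the pattern directly.
                templates.append(set(I))
                changed = True
            else:
                # Fast learning: template := template AND input.
                new_t = templates[winner] & I
                if new_t != templates[winner]:
                    templates[winner] = new_t
                    changed = True
    return templates, epoch
```

On a small list such as `[{0, 1, 2}, {0, 1}, {3, 4}]`, the templates settle after the first presentation, consistent with the paper's bound that a size-l pattern stabilizes within l list presentations.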
Georgiopoulos et al. "Convergence Properties of Learning in ART1." Neural Computation, 1990. doi:10.1162/NECO.1990.2.4.502
@article{georgiopoulos1990neco-convergence,
title = {{Convergence Properties of Learning in ART1}},
author = {Georgiopoulos, Michael and Heileman, Gregory L. and Huang, Juxin},
journal = {Neural Computation},
year = {1990},
pages = {502-509},
doi = {10.1162/NECO.1990.2.4.502},
volume = {2},
url = {https://mlanthology.org/neco/1990/georgiopoulos1990neco-convergence/}
}