Recurrent Neural Networks Can Learn to Implement Symbol-Sensitive Counting
Abstract
Recently, researchers have derived formal complexity analyses of analog computation in the setting of discrete-time dynamical systems. As an empirical contrast, training recurrent neural networks (RNNs) produces self-organized systems that are realizations of analog mechanisms. Previous work showed that an RNN can learn to process a simple context-free language (CFL) by counting. Herein, we extend that work to show that an RNN can learn a harder CFL, a simple palindrome, by organizing its resources into a symbol-sensitive counting solution, and we provide a dynamical systems analysis which demonstrates how the network can not only count, but also copy and store counting information.
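To make "symbol-sensitive counting" concrete, here is a minimal hand-built sketch of the mechanism the paper's dynamical systems analysis describes: each symbol pair drives its own analog counter by contraction (push) and expansion (pop), and one counter stores its value untouched while the other is in use. This is an illustration, not the paper's trained network or its weights, and the one-turn palindrome grammar a^n b^m B^m A^n used here is our assumption for the example.

```python
def recognize(string, tol=1e-6):
    """Accept strings of the form a^n b^m B^m A^n with two analog counters.

    Illustrative sketch (assumed grammar, hand-set dynamics): reading 'a'
    contracts the a-counter toward 0 (push); reading 'A' expands it back
    (pop). The b/B pair drives a second counter the same way. While b/B are
    processed, the a-counter simply holds (stores) its value. A string is
    accepted iff every push is matched by a pop, i.e. both counters return
    to their initial value.
    """
    order = {'a': 0, 'b': 1, 'B': 2, 'A': 3}
    h = [1.0, 1.0]          # hidden "state": two analog counters
    phase = 0               # enforce a* b* B* A* symbol order
    for sym in string:
        if sym not in order or order[sym] < phase:
            return False
        phase = order[sym]
        if sym == 'a':
            h[0] *= 0.5     # push: contract toward 0 (count up)
        elif sym == 'A':
            h[0] *= 2.0     # pop: expand back out (count down)
        elif sym == 'b':
            h[1] *= 0.5
        elif sym == 'B':
            h[1] *= 2.0
        if h[0] > 1.0 + tol or h[1] > 1.0 + tol:
            return False    # a pop with no matching push
    return abs(h[0] - 1.0) < tol and abs(h[1] - 1.0) < tol

print(recognize("aabbBBAA"))  # True:  n=2, m=2
print(recognize("aabBBAA"))   # False: unbalanced b/B
```

In the trained network analyzed in the paper, the analogous roles are played by attracting and repelling fixed points of the hidden-state dynamics rather than hand-set multipliers; the sketch only mirrors the contraction/expansion structure of that solution.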
Cite
Text
Rodriguez and Wiles. "Recurrent Neural Networks Can Learn to Implement Symbol-Sensitive Counting." Neural Information Processing Systems, 1997.Markdown
[Rodriguez and Wiles. "Recurrent Neural Networks Can Learn to Implement Symbol-Sensitive Counting." Neural Information Processing Systems, 1997.](https://mlanthology.org/neurips/1997/rodriguez1997neurips-recurrent/)BibTeX
@inproceedings{rodriguez1997neurips-recurrent,
title = {{Recurrent Neural Networks Can Learn to Implement Symbol-Sensitive Counting}},
author = {Rodriguez, Paul and Wiles, Janet},
booktitle = {Neural Information Processing Systems},
year = {1997},
pages = {87--93},
url = {https://mlanthology.org/neurips/1997/rodriguez1997neurips-recurrent/}
}