Induction of Finite-State Languages Using Second-Order Recurrent Networks
Abstract
Second-order recurrent networks that recognize simple finite-state languages over {0,1}* are induced from positive and negative examples. Using the complete gradient of the recurrent network and sufficient training examples to constrain the definition of the language to be induced, solutions are obtained that correctly recognize strings of arbitrary length.
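The abstract's core architecture can be illustrated with a short sketch. In a second-order recurrent network, the next state is computed from products of the current state units and the current input symbol, via a third-order weight tensor. The function names, the one-hot input encoding, and the acceptance convention (thresholding one designated state unit after the final symbol) are illustrative assumptions here, not details taken from the paper itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def second_order_step(W, state, symbol_onehot):
    """One state update of a second-order recurrent network:
        s_j(t+1) = g( sum_{i,k} W[j,i,k] * s_i(t) * x_k(t) )
    W has shape (n_states, n_states, n_symbols); the second-order
    term is the product s_i(t) * x_k(t)."""
    return sigmoid(np.einsum('jik,i,k->j', W, state, symbol_onehot))

def accept(W, s0, string, n_symbols=2, threshold=0.5):
    """Run the network over a binary string; accept if a designated
    state unit (here, unit 0) ends above threshold. This acceptance
    convention is an illustrative assumption."""
    s = s0.copy()
    for ch in string:
        x = np.zeros(n_symbols)
        x[int(ch)] = 1.0          # one-hot encoding of the input symbol
        s = second_order_step(W, s, x)
    return bool(s[0] > threshold)
```

Because the weights multiply state-input pairs, each input symbol effectively selects a state-transition matrix, which is why this architecture maps naturally onto finite-state automata.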
Cite

Text:
Watrous and Kuhn. "Induction of Finite-State Languages Using Second-Order Recurrent Networks." Neural Computation, 1992. doi:10.1162/NECO.1992.4.3.406

Markdown:
[Watrous and Kuhn. "Induction of Finite-State Languages Using Second-Order Recurrent Networks." Neural Computation, 1992.](https://mlanthology.org/neco/1992/watrous1992neco-induction/) doi:10.1162/NECO.1992.4.3.406

BibTeX:
@article{watrous1992neco-induction,
title = {{Induction of Finite-State Languages Using Second-Order Recurrent Networks}},
author = {Watrous, Raymond L. and Kuhn, Gary M.},
journal = {Neural Computation},
year = {1992},
pages = {406-414},
doi = {10.1162/NECO.1992.4.3.406},
volume = {4},
url = {https://mlanthology.org/neco/1992/watrous1992neco-induction/}
}