Training Second-Order Recurrent Neural Networks Using Hints
Abstract
We investigate a method for inserting rules into discrete-time second-order recurrent neural networks which are trained to recognize regular languages. The rules defining regular languages can be expressed in the form of transitions in the corresponding deterministic finite-state automaton. Inserting these rules as hints into networks with second-order connections is straightforward. Our simulation results show that even weak hints seem to improve the convergence time by an order of magnitude.
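The abstract's idea — encoding a DFA transition directly into a second-order weight tensor — can be illustrated with a small sketch. A second-order RNN updates its state units as S_i(t+1) = g(Σ_{j,k} W[i,j,k]·S_j(t)·I_k(t)), so a transition δ(q_source, symbol) = q_target maps naturally onto biasing the single weight W[target, source, symbol]. All sizes, the hint strength, and the helper names below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

# Sketch of hint insertion in a second-order RNN (sizes are assumptions).
# State update: S_i(t+1) = sigmoid(sum_{j,k} W[i,j,k] * S[j](t) * I[k](t))
n_states, n_symbols = 4, 2      # hypothetical DFA: 4 states, alphabet {0, 1}
H = 5.0                         # hint strength; larger = stronger prior

rng = np.random.default_rng(0)
W = rng.uniform(-0.1, 0.1, size=(n_states, n_states, n_symbols))

def insert_hint(W, target, source, symbol, strength=H):
    """Bias the weights so that reading `symbol` while in state `source`
    drives activation toward `target` -- a soft DFA transition."""
    W[target, source, symbol] += strength
    return W

# Example rule: delta(q0, symbol 1) = q2
W = insert_hint(W, target=2, source=0, symbol=1)

def step(W, S, symbol):
    """One second-order update with a one-hot input symbol."""
    I = np.zeros(n_symbols)
    I[symbol] = 1.0
    net = np.einsum('ijk,j,k->i', W, S, I)  # sum over state j and input k
    return 1.0 / (1.0 + np.exp(-net))       # sigmoid activation

S = np.zeros(n_states)
S[0] = 1.0                 # start in state q0
S = step(W, S, symbol=1)   # after the hint, unit 2 dominates the next state
```

After insertion the network still has small random weights everywhere else, so gradient training can refine or override the hint; this is one way to read the paper's claim that even weak hints speed up convergence.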
Cite
Text
Omlin and Giles. "Training Second-Order Recurrent Neural Networks Using Hints." International Conference on Machine Learning, 1992. doi:10.1016/B978-1-55860-247-2.50051-6
Markdown
[Omlin and Giles. "Training Second-Order Recurrent Neural Networks Using Hints." International Conference on Machine Learning, 1992.](https://mlanthology.org/icml/1992/omlin1992icml-training/) doi:10.1016/B978-1-55860-247-2.50051-6
BibTeX
@inproceedings{omlin1992icml-training,
title = {{Training Second-Order Recurrent Neural Networks Using Hints}},
author = {Omlin, Christian W. and Giles, C. Lee},
booktitle = {International Conference on Machine Learning},
year = {1992},
pages = {361--366},
doi = {10.1016/B978-1-55860-247-2.50051-6},
url = {https://mlanthology.org/icml/1992/omlin1992icml-training/}
}