A Convergence Result for Learning in Recurrent Neural Networks
Abstract
We give a rigorous analysis of the convergence properties of a backpropagation algorithm for recurrent networks containing either output or hidden layer recurrence. The conditions permit data generated by stochastic processes with considerable dependence. Restrictions are offered that may help assure convergence of the network parameters to a local optimum, as some simulations illustrate.
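The paper analyzes convergence of online gradient learning for recurrent networks. As a rough illustration of the setting (not the paper's exact algorithm or conditions), the following sketch trains a scalar recurrent unit with an RTRL-style recursive gradient; the learning rate, data process, and network size are all hypothetical choices for the example.

```python
import math
import random

def rtrl_train(xs, ys, lr=0.05):
    """Online gradient learning for a scalar recurrent unit
    y_hat_t = tanh(w*x_t + v*y_hat_{t-1}), with the state
    sensitivities carried forward recursively (RTRL-style).
    Illustrative only; not the paper's algorithm."""
    w, v = 0.0, 0.0            # trainable parameters
    h = 0.0                    # previous output y_hat_{t-1}
    dh_dw, dh_dv = 0.0, 0.0    # recursive gradients of the state
    losses = []
    for x, y in zip(xs, ys):
        out = math.tanh(w * x + v * h)
        g = 1.0 - out * out    # derivative of tanh at the pre-activation
        # RTRL recursions: gradient of current output includes the
        # gradient of the carried-over state
        dout_dw = g * (x + v * dh_dw)
        dout_dv = g * (h + v * dh_dv)
        err = out - y
        losses.append(err * err)
        w -= lr * err * dout_dw
        v -= lr * err * dout_dv
        h, dh_dw, dh_dv = out, dout_dw, dout_dv
    return w, v, losses

# Data from a (hypothetical) target recurrent process with
# dependent observations, echoing the dependence allowed in the paper.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(5000)]
ys, y = [], 0.0
for x in xs:
    y = math.tanh(0.8 * x + 0.3 * y)
    ys.append(y)

w, v, losses = rtrl_train(xs, ys)
early = sum(losses[:500]) / 500
late = sum(losses[-500:]) / 500
```

On this toy process the average squared error in the last 500 steps is well below that of the first 500, consistent with convergence toward a local optimum.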
Cite

Text

Kuan et al. "A Convergence Result for Learning in Recurrent Neural Networks." Neural Computation, 1994. doi:10.1162/NECO.1994.6.3.420

Markdown

[Kuan et al. "A Convergence Result for Learning in Recurrent Neural Networks." Neural Computation, 1994.](https://mlanthology.org/neco/1994/kuan1994neco-convergence/) doi:10.1162/NECO.1994.6.3.420

BibTeX
@article{kuan1994neco-convergence,
title = {{A Convergence Result for Learning in Recurrent Neural Networks}},
author = {Kuan, Chung-Ming and Hornik, Kurt and White, Halbert},
journal = {Neural Computation},
year = {1994},
pages = {420--440},
doi = {10.1162/NECO.1994.6.3.420},
volume = {6},
number = {3},
url = {https://mlanthology.org/neco/1994/kuan1994neco-convergence/}
}