An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories
Abstract
A novel variant of the familiar backpropagation-through-time approach to training recurrent networks is described. This algorithm is intended to be used on arbitrary recurrent networks that run continually without ever being reset to an initial state, and it is specifically designed for computationally efficient computer implementation. This algorithm can be viewed as a cross between epochwise backpropagation through time, which is not appropriate for continually running networks, and the widely used on-line gradient approximation technique of truncated backpropagation through time.
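The truncated backpropagation-through-time technique mentioned in the abstract can be illustrated with a minimal sketch: a small recurrent network runs continually, keeps only the last few time steps in a buffer, and backpropagates each step's error through that truncated window. All details here (network sizes, the toy target, the learning rate, and the truncation depth `h_trunc`) are illustrative assumptions, not the paper's specific algorithm.

```python
import numpy as np

# Minimal sketch of truncated BPTT for a continually running recurrent
# network. Sizes, learning rate, and truncation depth are assumptions
# chosen for illustration only.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W_in = rng.normal(scale=0.1, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))
W_out = rng.normal(scale=0.1, size=(1, n_hid))

h_trunc = 4          # backpropagate through at most the last h_trunc steps
lr = 0.05
state = np.zeros(n_hid)
history = []         # (prev_state, input, new_state) for the truncation window

losses = []
for t in range(200):
    x = rng.normal(size=n_in)
    target = x.sum()                 # toy target; the network is never reset
    prev = state
    state = np.tanh(W_in @ x + W_rec @ prev)
    y = W_out @ state
    err = y - target
    losses.append(float(err[0]) ** 2)

    history.append((prev, x, state))
    if len(history) > h_trunc:
        history.pop(0)               # discard information older than h_trunc

    # Backpropagate the current error through the truncated window only.
    dW_in = np.zeros_like(W_in)
    dW_rec = np.zeros_like(W_rec)
    dW_out = 2 * err[:, None] * state[None, :]
    dstate = (W_out.T @ (2 * err)).ravel()
    for prev_s, x_s, s in reversed(history):
        dpre = dstate * (1 - s ** 2)   # derivative of tanh
        dW_in += np.outer(dpre, x_s)
        dW_rec += np.outer(dpre, prev_s)
        dstate = W_rec.T @ dpre        # carry error one step further back

    W_in -= lr * dW_in
    W_rec -= lr * dW_rec
    W_out -= lr * dW_out
```

Because only `h_trunc` past steps are stored and revisited, the per-step cost is bounded regardless of how long the network has been running, which is the practical appeal of truncation over full backpropagation through time.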
Cite
Text
Williams and Peng. "An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories." Neural Computation, 1990. doi:10.1162/NECO.1990.2.4.490
Markdown
[Williams and Peng. "An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories." Neural Computation, 1990.](https://mlanthology.org/neco/1990/williams1990neco-efficient/) doi:10.1162/NECO.1990.2.4.490
BibTeX
@article{williams1990neco-efficient,
title = {{An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories}},
author = {Williams, Ronald J. and Peng, Jing},
journal = {Neural Computation},
year = {1990},
pages = {490--501},
doi = {10.1162/NECO.1990.2.4.490},
volume = {2},
url = {https://mlanthology.org/neco/1990/williams1990neco-efficient/}
}