Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation

Abstract

Error backpropagation in feedforward neural network models is a popular learning algorithm that has its roots in nonlinear estimation and optimization. It is routinely used to calculate error gradients in nonlinear systems with hundreds of thousands of parameters. However, the classical architecture for backpropagation has severe restrictions. The extension of backpropagation to networks with recurrent connections will be reviewed. It is now possible to efficiently compute the error gradients for networks that have temporal dynamics, which opens applications to a host of problems in system identification and control.
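The idea the abstract summarizes, often called recurrent backpropagation, can be illustrated with a minimal sketch: relax the network dynamics to a fixed point, relax a second (adjoint) linear system to propagate the output error, and update the weights locally from the two fixed points. The network size, input values, output units, targets, and learning rate below are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigma(x):
    # Logistic activation function
    return 1.0 / (1.0 + np.exp(-x))

def relax(f, x0, steps=300, dt=0.1):
    # Euler-integrate dx/dt = f(x); for a contractive system this
    # settles near a fixed point of f(x) = 0.
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x

rng = np.random.default_rng(0)
n = 5                                      # small fully recurrent net (illustrative size)
W = 0.1 * rng.standard_normal((n, n))      # recurrent weights, small so relaxation converges
I = np.array([1.0, -1.0, 0.5, 0.0, 0.0])   # fixed external input (made-up values)
out = np.array([3, 4])                     # designated output units (an assumption)
target = np.array([0.8, 0.2])              # desired fixed-point activity at the outputs
eta = 0.2                                  # learning rate (an assumption)

def loss():
    x = relax(lambda x: -x + sigma(W @ x + I), np.zeros(n))
    return 0.5 * np.sum((target - x[out]) ** 2)

initial_loss = loss()
for _ in range(300):
    # 1) Forward relaxation: dx/dt = -x + sigma(W x + I) settles to x*.
    x = relax(lambda x: -x + sigma(W @ x + I), np.zeros(n))
    s = sigma(W @ x + I)
    d = s * (1.0 - s)                      # sigma'(W x* + I)
    e = np.zeros(n)
    e[out] = target - x[out]               # error injected only at the output units
    # 2) Adjoint relaxation: dy/dt = -y + W^T (d * y) + e settles to y*,
    #    the solution of the linear system (Id - W^T diag(d)) y = e.
    y = relax(lambda y: -y + W.T @ (d * y) + e, np.zeros(n))
    # 3) Local gradient-descent update: dW_ij = eta * d_i * y_i * x_j.
    W += eta * np.outer(d * y, x)
final_loss = loss()
print(initial_loss, final_loss)
```

The key point, reflected in step 2, is that the error gradient is obtained by relaxing a second dynamical system of the same size as the network, rather than by unrolling the dynamics in time, so the cost per update stays comparable to the forward relaxation itself.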

Cite

Text

Pineda. "Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation." Neural Computation, 1989. doi:10.1162/NECO.1989.1.2.161

Markdown

[Pineda. "Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation." Neural Computation, 1989.](https://mlanthology.org/neco/1989/pineda1989neco-recurrent/) doi:10.1162/NECO.1989.1.2.161

BibTeX

@article{pineda1989neco-recurrent,
  title     = {{Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation}},
  author    = {Pineda, Fernando J.},
  journal   = {Neural Computation},
  year      = {1989},
  pages     = {161--172},
  doi       = {10.1162/NECO.1989.1.2.161},
  volume    = {1},
  url       = {https://mlanthology.org/neco/1989/pineda1989neco-recurrent/}
}