A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization

Abstract

A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error. A substantially faster learning speed is hence allowed. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm, and present simulation results for dynamic trajectory learning in recurrent networks.
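The abstract describes a perturbative update: the whole parameter vector is randomly perturbed, the resulting change in error is observed, and the parameters are moved against the perturbation in proportion to that change, so no internal gradient information is required. The sketch below illustrates this style of update on a generic error function; the function name, learning rate, perturbation size, and step count are illustrative assumptions, not values from the paper.

import numpy as np

def stochastic_error_descent(error_fn, theta, mu=0.5, sigma=0.05, steps=2000):
    """Minimal sketch of parallel perturbative stochastic error descent.

    error_fn: callable mapping a parameter vector to a scalar error
              (hypothetical placeholder for a network error measure).
    theta:    initial parameter vector.
    mu:       learning rate (assumed value, for illustration only).
    sigma:    perturbation amplitude (assumed value).
    """
    theta = np.asarray(theta, dtype=float)
    for _ in range(steps):
        # Perturb all parameters in parallel by a random +/- sigma vector.
        pi = sigma * np.sign(np.random.randn(theta.size))
        # Observe only the change in error caused by the perturbation;
        # no explicit knowledge of the network internals is used.
        delta_e = error_fn(theta + pi) - error_fn(theta)
        # Move against the perturbation, scaled by the observed error change,
        # so each perturbation on average contributes an error decrease.
        theta -= mu * delta_e * pi
    return theta

# Illustrative usage on a simple quadratic error (not from the paper):
# target = np.array([1.0, -2.0, 0.5])
# err = lambda w: float(np.sum((w - target) ** 2))
# w = stochastic_error_descent(err, np.zeros(3))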

Cite

Text

Gert Cauwenberghs. "A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization." Neural Information Processing Systems, 1992.

Markdown

[Gert Cauwenberghs. "A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/cauwenberghs1992neurips-fast/)

BibTeX

@inproceedings{cauwenberghs1992neurips-fast,
  title     = {{A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization}},
  author    = {Cauwenberghs, Gert},
  booktitle = {Neural Information Processing Systems},
  year      = {1992},
  pages     = {244--251},
  url       = {https://mlanthology.org/neurips/1992/cauwenberghs1992neurips-fast/}
}