Adaptive Back-Propagation in On-Line Learning of Multilayer Networks

Abstract

An adaptive back-propagation algorithm is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, both numerical studies and a rigorous analysis show that adaptive back-propagation trains faster than gradient descent: it breaks the symmetry between hidden units more efficiently and converges more quickly to optimal generalization.
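As a concrete picture of the on-line setting described above, here is a minimal sketch in Python: a student two-layer network with unit hidden-to-output weights (the soft-committee-machine setup common in this literature) is trained on one fresh example per step against a matching teacher. The adaptive rule shown, a gain `beta` inside the hidden-unit derivative with `beta = 1` recovering standard back-propagation, and all parameter values are illustrative assumptions for this sketch, not the paper's exact prescription.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 3            # input dimension, number of hidden units
eta, beta = 0.1, 4.0     # learning rate; beta = 1 gives standard back-propagation

g = np.tanh              # hidden-unit activation

# Teacher: a fixed soft committee machine the student must learn.
B = rng.standard_normal((K, N)) / np.sqrt(N)
# Student weights, small random initialisation.
W = rng.standard_normal((K, N)) * 0.01

def forward(weights, xi):
    h = weights @ xi               # hidden-unit fields
    return h, g(h).sum()           # unit hidden-to-output weights

for step in range(10_000):
    xi = rng.standard_normal(N)    # one fresh example per step (on-line learning)
    _, y_teacher = forward(B, xi)
    h, y_student = forward(W, xi)
    delta = y_teacher - y_student  # output error
    # Assumed adaptive rule: the error is assigned to each hidden unit
    # through g'(beta * h) instead of g'(h), which sharpens the credit
    # assignment and helps break the symmetry between hidden units.
    gprime = 1.0 - np.tanh(beta * h) ** 2
    W += (eta / N) * delta * gprime[:, None] * xi[None, :]
```

With `beta = 1` the update reduces to plain on-line gradient descent, so the two algorithms compared in the paper can be run from the same loop by changing a single parameter.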

Cite

Text

West and Saad. "Adaptive Back-Propagation in On-Line Learning of Multilayer Networks." Neural Information Processing Systems, 1995.

Markdown

[West and Saad. "Adaptive Back-Propagation in On-Line Learning of Multilayer Networks." Neural Information Processing Systems, 1995.](https://mlanthology.org/neurips/1995/west1995neurips-adaptive/)

BibTeX

@inproceedings{west1995neurips-adaptive,
  title     = {{Adaptive Back-Propagation in On-Line Learning of Multilayer Networks}},
  author    = {West, Ansgar H. L. and Saad, David},
  booktitle = {Neural Information Processing Systems},
  year      = {1995},
  pages     = {323--329},
  url       = {https://mlanthology.org/neurips/1995/west1995neurips-adaptive/}
}