Efficient Parallel Learning Algorithms for Neural Networks
Abstract
Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribiere method are also significantly more efficient than the Backpropagation algorithm. These results are based on experiments performed on small boolean learning problems and the noisy real-valued learning problem of hand-written character recognition.
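The Polak-Ribiere method referenced in the abstract is a nonlinear conjugate-gradient scheme: each search direction combines the negative gradient with the previous direction, weighted by the Polak-Ribiere coefficient. The sketch below is illustrative only (it is not the paper's implementation): it applies the update to a small quadratic objective, where an exact line search is available, rather than to a network's error surface. All function and variable names are hypothetical.

```python
def pr_minimize(A, b, x, iters=50):
    """Minimize 0.5*x^T A x - b^T x with Polak-Ribiere conjugate gradients.

    Illustrative sketch: a quadratic stands in for a network's error
    surface so that the line search can be done exactly.
    """
    n = len(x)

    def grad(x):
        # Gradient of the quadratic: g = A x - b
        return [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]

    g = grad(x)
    d = [-gi for gi in g]  # initial direction: steepest descent
    for _ in range(iters):
        # Exact line search for a quadratic: alpha = -(g.d) / (d^T A d)
        Ad = [sum(A[i][j] * d[j] for j in range(n)) for i in range(n)]
        dAd = sum(d[i] * Ad[i] for i in range(n))
        if dAd <= 1e-20:
            break
        alpha = -sum(g[i] * d[i] for i in range(n)) / dAd
        x = [x[i] + alpha * d[i] for i in range(n)]
        g_new = grad(x)
        if sum(gi * gi for gi in g_new) < 1e-20:
            break  # converged
        # Polak-Ribiere coefficient: beta = g_new.(g_new - g) / (g.g)
        beta = sum(g_new[i] * (g_new[i] - g[i]) for i in range(n)) \
            / sum(gi * gi for gi in g)
        beta = max(beta, 0.0)  # common PR+ safeguard (automatic restart)
        d = [-g_new[i] + beta * d[i] for i in range(n)]
        g = g_new
    return x
```

On a general (non-quadratic) network error surface the exact line search would be replaced by an inexact one, but the direction update is the same; because it needs only gradients and a few vector operations, the method parallelizes in the same way as a gradient computation.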
Kramer and Sangiovanni-Vincentelli. "Efficient Parallel Learning Algorithms for Neural Networks." Neural Information Processing Systems, 1988.
@inproceedings{kramer1988neurips-efficient,
title = {{Efficient Parallel Learning Algorithms for Neural Networks}},
author = {Kramer, Alan H. and Sangiovanni-Vincentelli, Alberto},
booktitle = {Neural Information Processing Systems},
year = {1988},
pages = {40-48},
url = {https://mlanthology.org/neurips/1988/kramer1988neurips-efficient/}
}