Dynamic Behavior of Constrained Back-Propagation Networks
Abstract
The learning dynamics of the back-propagation algorithm are investigated when complexity constraints are added to the standard Least Mean Square (LMS) cost function. It is shown that loss of generalization performance due to overtraining can be avoided when using such complexity constraints. Furthermore, "energy," hidden representations and weight distributions are observed and compared during learning. An attempt is made at explaining the results in terms of linear and non-linear effects in relation to the gradient descent learning algorithm.
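The abstract describes training by gradient descent on the LMS cost augmented with a complexity constraint. The sketch below illustrates that idea on a toy one-hidden-layer network; the quadratic weight penalty, its strength `lam`, the toy data, and all other parameter values are illustrative assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

# Minimal sketch: back-propagation on an LMS cost plus a complexity penalty.
# The quadratic weight penalty used here is an assumed stand-in for the
# paper's complexity constraints; it is not the paper's exact formulation.

rng = np.random.default_rng(0)

# Toy data: a small noisy regression problem where overtraining is plausible.
X = rng.normal(size=(20, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)).reshape(-1, 1)

n_hidden = 10
W1 = rng.normal(scale=0.1, size=(3, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))

lr, lam = 0.05, 1e-3  # learning rate and penalty strength (assumed values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = h @ W2

    # Augmented cost: LMS error plus a quadratic complexity penalty on weights.
    err = out - y
    cost = 0.5 * np.mean(err ** 2) + lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

    # Backward pass: gradient of the LMS term through the network...
    d_out = err / len(X)
    g_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    g_W1 = X.T @ d_h

    # ...plus the gradient of the penalty term (2*lam*W), i.e. weight decay.
    W1 -= lr * (g_W1 + 2.0 * lam * W1)
    W2 -= lr * (g_W2 + 2.0 * lam * W2)

print(f"final augmented cost: {cost:.4f}")
```

Because the penalty's gradient shrinks every weight toward zero at each step, it limits the effective complexity of the fitted network, which is one mechanism by which such constraints can curb overtraining.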
Cite

Text
Chauvin. "Dynamic Behavior of Constrained Back-Propagation Networks." Neural Information Processing Systems, 1989.

Markdown
[Chauvin. "Dynamic Behavior of Constrained Back-Propagation Networks." Neural Information Processing Systems, 1989.](https://mlanthology.org/neurips/1989/chauvin1989neurips-dynamic/)

BibTeX
@inproceedings{chauvin1989neurips-dynamic,
title = {{Dynamic Behavior of Constrained Back-Propagation Networks}},
author = {Chauvin, Yves},
booktitle = {Neural Information Processing Systems},
year = {1989},
pages = {642-649},
url = {https://mlanthology.org/neurips/1989/chauvin1989neurips-dynamic/}
}