Extended Regularization Methods for Nonconvergent Model Selection

Abstract

Many techniques for model selection in the field of neural networks correspond to well-established statistical methods. The method of 'stopped training', on the other hand, in which an oversized network is trained until the error on a further validation set of examples deteriorates, at which point training is halted, is a true innovation, since model selection does not require convergence of the training process. In this paper we show that the performance of stopped training can be significantly enhanced by extending this nonconvergent model selection method to include dynamic topology modifications (dynamic weight pruning) and modified complexity penalty term methods in which the weighting of the penalty term is adjusted during the training process.
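The stopped-training procedure the abstract describes can be sketched in a few lines: train a model, monitor error on a held-out validation set, and halt when the validation error stops improving, keeping the best weights seen so far. The sketch below is a minimal illustration on a linear model with gradient descent; the function name, the `patience` parameter, and the data setup are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def train_with_stopped_training(X_tr, y_tr, X_val, y_val,
                                lr=0.01, max_epochs=500, patience=10):
    """Gradient descent on squared error; stop once the validation
    error has not improved for `patience` consecutive epochs, and
    return the weights that achieved the best validation error."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X_tr.shape[1])
    best_w, best_val = w.copy(), np.inf
    bad_epochs = 0
    for epoch in range(max_epochs):
        # Gradient of mean squared training error.
        grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
        w -= lr * grad
        # Model selection happens on the validation set, not the
        # training set, so convergence of training is not required.
        val_err = np.mean((X_val @ w - y_val) ** 2)
        if val_err < best_val:
            best_val, best_w = val_err, w.copy()
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # validation error deteriorated: stop training
    return best_w, best_val
```

The paper's extensions would hook into the same loop: pruning small weights during training (dynamic topology modification) or adding a complexity penalty to `grad` whose weighting is adjusted from epoch to epoch.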

Cite

Text

Finnoff et al. "Extended Regularization Methods for Nonconvergent Model Selection." Neural Information Processing Systems, 1992.

Markdown

[Finnoff et al. "Extended Regularization Methods for Nonconvergent Model Selection." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/finnoff1992neurips-extended/)

BibTeX

@inproceedings{finnoff1992neurips-extended,
  title     = {{Extended Regularization Methods for Nonconvergent Model Selection}},
  author    = {Finnoff, W. and Hergert, F. and Zimmermann, H. G.},
  booktitle = {Neural Information Processing Systems},
  year      = {1992},
  pages     = {228--235},
  url       = {https://mlanthology.org/neurips/1992/finnoff1992neurips-extended/}
}