A Smoothing Regularizer for Recurrent Neural Networks
Abstract
We derive a smoothing regularizer for recurrent network models by requiring robustness in prediction performance to perturbations of the training data. The regularizer can be viewed as a generalization of the first-order Tikhonov stabilizer to dynamic models. The closed-form expression of the regularizer covers both time-lagged and simultaneous recurrent nets, with feedforward nets and one-layer linear nets as special cases. We have successfully tested this regularizer in a number of case studies and found that it performs better than standard quadratic weight decay.
Cite
Text

Wu and Moody. "A Smoothing Regularizer for Recurrent Neural Networks." Neural Information Processing Systems, 1995.

Markdown

[Wu and Moody. "A Smoothing Regularizer for Recurrent Neural Networks." Neural Information Processing Systems, 1995.](https://mlanthology.org/neurips/1995/wu1995neurips-smoothing/)

BibTeX
@inproceedings{wu1995neurips-smoothing,
title = {{A Smoothing Regularizer for Recurrent Neural Networks}},
author = {Wu, Lizhong and Moody, John E.},
booktitle = {Neural Information Processing Systems},
year = {1995},
pages = {458--464},
url = {https://mlanthology.org/neurips/1995/wu1995neurips-smoothing/}
}