Second-Order Learning Algorithm with Squared Penalty Term
Abstract
This paper compares three penalty terms with respect to the efficiency of supervised learning, using first- and second-order learning algorithms. Our experiments showed that, for a reasonably adequate penalty factor, the combination of the squared penalty term and the second-order learning algorithm drastically improves convergence, by more than a factor of 20 over the other combinations, while also yielding better generalization performance.
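The sketch below illustrates the general idea of the squared penalty term, not the paper's specific method: a sum-of-squares training error plus a weight-decay term mu * sum(w_i^2), minimized with a generic quasi-Newton optimizer (SciPy's BFGS) as a stand-in for the authors' second-order algorithm. The toy data, the 3-5-1 network, and the value of mu are illustrative assumptions.

```python
# Minimal sketch (not the authors' algorithm): sum-of-squares error with a
# squared penalty term mu * sum(w_i^2), minimized by a quasi-Newton optimizer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # toy inputs (assumed)
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))    # toy targets (assumed)

H = 5                                          # hidden units (assumed)
n_in = X.shape[1]
n_params = H * n_in + H + H + 1                # weights of a 3-H-1 MLP

def unpack(w):
    """Split the flat parameter vector into MLP weight matrices and biases."""
    i = 0
    W1 = w[i:i + H * n_in].reshape(H, n_in); i += H * n_in
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H]; i += H
    b2 = w[i]
    return W1, b1, W2, b2

def objective(w, mu=1e-3):
    """Training error plus squared penalty term."""
    W1, b1, W2, b2 = unpack(w)
    hidden = np.tanh(X @ W1.T + b1)
    pred = hidden @ W2 + b2
    error = 0.5 * np.sum((pred - y) ** 2)      # sum-of-squares error
    penalty = mu * np.sum(w ** 2)              # squared penalty term
    return error + penalty

w0 = rng.normal(scale=0.1, size=n_params)
res = minimize(objective, w0, method="BFGS")   # quasi-Newton stand-in
print("final objective:", res.fun)
```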
Cite
Text
Saito and Nakano. "Second-Order Learning Algorithm with Squared Penalty Term." Neural Information Processing Systems, 1996.

Markdown

[Saito and Nakano. "Second-Order Learning Algorithm with Squared Penalty Term." Neural Information Processing Systems, 1996.](https://mlanthology.org/neurips/1996/saito1996neurips-secondorder/)

BibTeX
@inproceedings{saito1996neurips-secondorder,
  title = {{Second-Order Learning Algorithm with Squared Penalty Term}},
  author = {Saito, Kazumi and Nakano, Ryohei},
  booktitle = {Neural Information Processing Systems},
  year = {1996},
  pages = {627--633},
  url = {https://mlanthology.org/neurips/1996/saito1996neurips-secondorder/}
}