Asymptotic Theory for Regularization: One-Dimensional Linear Case
Abstract
The generalization ability of a neural network can sometimes be improved dramatically by regularization. To analyze the improvement one needs more refined results than the asymptotic distribution of the weight vector. Here we study the simple case of one-dimensional linear regression under quadratic regularization, i.e., ridge regression. We study the random design, misspecified case, where we derive expansions for the optimal regularization parameter and the ensuing improvement. It is possible to construct examples where it is best to use no regularization.
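As a concrete illustration of the setting the abstract describes, here is a minimal sketch of one-dimensional ridge regression under a random, misspecified design. It is not the paper's code: the true regression function, noise level, sample size, and grid of regularization parameters below are illustrative assumptions. The sketch only shows how the generalization error of the one-dimensional ridge estimator can be compared across regularization strengths by Monte Carlo.

```python
# Illustrative sketch (assumptions, not the paper's setup): fit y ~ w*x by
# ridge regression when the data actually come from a nonlinear function
# (misspecified model) with inputs drawn at random (random design).
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.3  # illustrative noise level

def true_f(x):
    # Misspecified case: the data-generating function is nonlinear,
    # but the model fitted below is purely linear in x.
    return np.sin(2.0 * x)

def ridge_1d(x, y, lam):
    # Closed-form minimizer of sum((y - w*x)**2) + lam * w**2.
    return np.sum(x * y) / (np.sum(x * x) + lam)

# Precompute population moments by Monte Carlo; the expected squared
# error of a linear predictor w is then myy - 2*w*mxy + w**2 * mxx.
xt = rng.normal(size=1_000_000)
yt = true_f(xt) + sigma * rng.normal(size=xt.size)
mxx, mxy, myy = np.mean(xt * xt), np.mean(xt * yt), np.mean(yt * yt)

def gen_error(w):
    return myy - 2.0 * w * mxy + w * w * mxx

n, n_repeats = 25, 2000
lams = np.linspace(0.0, 10.0, 101)  # grid of regularization parameters
errors = np.zeros_like(lams)

for _ in range(n_repeats):
    x = rng.normal(size=n)                       # random design
    y = true_f(x) + sigma * rng.normal(size=n)
    errors += [gen_error(ridge_1d(x, y, lam)) for lam in lams]

errors /= n_repeats
best = lams[np.argmin(errors)]
print(f"lambda = 0.00 -> mean generalization error {errors[0]:.4f}")
print(f"lambda = {best:.2f} -> mean generalization error {errors.min():.4f}")
```

Varying the noise level, sample size, or target function in this sketch changes whether the error-minimizing regularization parameter is strictly positive or zero, which is consistent with the abstract's remark that examples exist where it is best to use no regularization.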
Cite
Text
Koistinen. "Asymptotic Theory for Regularization: One-Dimensional Linear Case." Neural Information Processing Systems, 1997.
Markdown
[Koistinen. "Asymptotic Theory for Regularization: One-Dimensional Linear Case." Neural Information Processing Systems, 1997.](https://mlanthology.org/neurips/1997/koistinen1997neurips-asymptotic/)
BibTeX
@inproceedings{koistinen1997neurips-asymptotic,
  title = {{Asymptotic Theory for Regularization: One-Dimensional Linear Case}},
  author = {Koistinen, Petri},
  booktitle = {Neural Information Processing Systems},
  year = {1997},
  pages = {294-300},
  url = {https://mlanthology.org/neurips/1997/koistinen1997neurips-asymptotic/}
}