Generalization Error of Generalized Linear Models in High Dimensions
Abstract
At the heart of machine learning lies the question of generalizability of learned rules over previously unseen data. While over-parameterized models based on neural networks are now ubiquitous in machine learning applications, our understanding of their generalization capabilities is incomplete, and the task is made harder by the non-convexity of the underlying learning problems. We provide a general framework to characterize the asymptotic generalization error for single-layer neural networks (i.e., generalized linear models) with arbitrary non-linearities, making it applicable to regression as well as classification problems. This framework enables analyzing the effect of (i) over-parameterization and non-linearity during modeling; (ii) choices of loss function, initialization, and regularizer during learning; and (iii) mismatch between training and test distributions. As examples, we analyze a few special cases, namely linear regression and logistic regression. We are also able to rigorously and analytically explain the *double descent* phenomenon in generalized linear models.
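The double descent curve that the paper characterizes analytically can be reproduced numerically in the simplest special case it covers, unregularized linear regression. Below is a minimal sketch, not the paper's code: the sizes `n_train`, `d`, the noise level `sigma`, and the feature-subsetting scheme are all illustrative assumptions. It fits minimum-norm least squares with a growing number of features `p` and shows the test error peaking at the interpolation threshold `p = n_train` before descending again in the over-parameterized regime.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 200   # hypothetical problem sizes for illustration
w_true = rng.standard_normal(d) / np.sqrt(d)

def test_mse(p, sigma=0.5):
    """Test MSE of the minimum-norm least-squares fit using the first p features."""
    X_tr = rng.standard_normal((n_train, d))
    X_te = rng.standard_normal((n_test, d))
    y_tr = X_tr @ w_true + sigma * rng.standard_normal(n_train)
    y_te = X_te @ w_true + sigma * rng.standard_normal(n_test)
    # Misspecified model: only the first p of the d true features are used,
    # so small p incurs bias while p near n_train blows up the variance.
    w_hat = np.linalg.pinv(X_tr[:, :p]) @ y_tr   # minimum-norm solution
    return np.mean((X_te[:, :p] @ w_hat - y_te) ** 2)

for p in [20, 50, 90, 100, 110, 150, 200]:
    errs = [test_mse(p) for _ in range(20)]
    print(f"p={p:4d}  p/n={p/n_train:.2f}  test MSE={np.mean(errs):.3f}")
```

Averaged over trials, the printed error rises sharply as `p` approaches `n_train = 100` and then decreases past it; the paper derives this curve exactly in the high-dimensional limit rather than by simulation.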
Cite
Text
Emami et al. "Generalization Error of Generalized Linear Models in High Dimensions." International Conference on Machine Learning, 2020.

Markdown

[Emami et al. "Generalization Error of Generalized Linear Models in High Dimensions." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/emami2020icml-generalization/)

BibTeX
@inproceedings{emami2020icml-generalization,
  title     = {{Generalization Error of Generalized Linear Models in High Dimensions}},
  author    = {Emami, Melikasadat and Sahraee-Ardakan, Mojtaba and Pandit, Parthe and Rangan, Sundeep and Fletcher, Alyson},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {2892--2901},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/emami2020icml-generalization/}
}