Failures of Model-Dependent Generalization Bounds for Least-Norm Interpolation
Abstract
We consider bounds on the generalization performance of the least-norm linear regressor, in the over-parameterized regime where it can interpolate the data. We describe a sense in which any generalization bound of a type that is commonly proved in statistical learning theory must sometimes be very loose when applied to analyze the least-norm interpolant. In particular, for a variety of natural joint distributions on training examples, any valid generalization bound that depends only on the output of the learning algorithm, the number of training examples, and the confidence parameter, and that satisfies a mild condition (substantially weaker than monotonicity in sample size), must sometimes be very loose: it can be bounded below by a constant when the true excess risk goes to zero.
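As a concrete sketch of the object in the abstract, the least-norm interpolant is the minimum-l2-norm solution of X theta = y; in the over-parameterized regime (more features than training examples) it fits the training data exactly. The Python snippet below, using a synthetic Gaussian design assumed purely for illustration, computes it via the Moore-Penrose pseudoinverse and checks the interpolation and minimum-norm properties.

import numpy as np

# Synthetic over-parameterized setup (assumed for illustration): d >> n.
rng = np.random.default_rng(0)
n, d = 20, 500
X = rng.standard_normal((n, d))                     # design matrix: n examples, d features
theta_star = rng.standard_normal(d) / np.sqrt(d)    # ground-truth parameter
y = X @ theta_star + 0.1 * rng.standard_normal(n)   # noisy labels

# Least-norm interpolant: the minimum-l2-norm solution of X theta = y,
# given by the Moore-Penrose pseudoinverse when X has full row rank.
theta_hat = np.linalg.pinv(X) @ y

# It interpolates: training residuals are (numerically) zero.
print("max training residual:", np.max(np.abs(X @ theta_hat - y)))

# Its norm is smallest among all interpolating solutions; adding any
# null-space direction of X keeps the fit but increases the norm.
null_dir = np.linalg.svd(X)[2][-1]   # right-singular vector with zero singular value
alt = theta_hat + null_dir
print("norms:", np.linalg.norm(theta_hat), "<=", np.linalg.norm(alt))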
Cite
Text
Bartlett and Long. "Failures of Model-Dependent Generalization Bounds for Least-Norm Interpolation." Journal of Machine Learning Research, 2021.
Markdown
[Bartlett and Long. "Failures of Model-Dependent Generalization Bounds for Least-Norm Interpolation." Journal of Machine Learning Research, 2021.](https://mlanthology.org/jmlr/2021/bartlett2021jmlr-failures/)
BibTeX
@article{bartlett2021jmlr-failures,
title = {{Failures of Model-Dependent Generalization Bounds for Least-Norm Interpolation}},
author = {Bartlett, Peter L. and Long, Philip M.},
journal = {Journal of Machine Learning Research},
year = {2021},
pages = {1-15},
volume = {22},
url = {https://mlanthology.org/jmlr/2021/bartlett2021jmlr-failures/}
}