A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models

Abstract

We prove a new generalization bound that shows for any class of linear predictors in Gaussian space, the Rademacher complexity of the class and the training error under any continuous loss $\ell$ can control the test error under all Moreau envelopes of the loss $\ell$. We use our finite-sample bound to directly recover the “optimistic rate” of Zhou et al. (2021) for linear regression with the square loss, which is known to be tight for minimal $\ell_2$-norm interpolation, but we also handle more general settings where the label is generated by a potentially misspecified multi-index model. The same argument can analyze noisy interpolation of max-margin classifiers through the squared hinge loss, and establishes consistency results in spiked-covariance settings. More generally, when the loss is only assumed to be Lipschitz, our bound effectively improves Talagrand’s well-known contraction lemma by a factor of two, and we prove uniform convergence of interpolators (Koehler et al. 2021) for all smooth, non-negative losses. Finally, we show that application of our generalization bound using localized Gaussian width will generally be sharp for empirical risk minimizers, establishing a non-asymptotic Moreau envelope theory for generalization that applies outside of proportional scaling regimes, handles model misspecification, and complements existing asymptotic Moreau envelope theories for M-estimation.
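For context, the Moreau envelope smooths a loss by infimal convolution with a quadratic. Under the standard definition (written here in our own notation, which may differ from the paper's exact parameterization), for a smoothing parameter $\lambda > 0$ the envelope of $\ell(\cdot; y)$ is

$$
\ell_\lambda(\hat{y}; y) \;=\; \inf_{u \in \mathbb{R}} \left\{ \ell(u; y) + \frac{1}{2\lambda}\,(u - \hat{y})^2 \right\}.
$$

For losses that are lower semicontinuous and bounded below, $\ell_\lambda \le \ell$ for every $\lambda > 0$ and $\ell_\lambda \to \ell$ pointwise as $\lambda \to 0$, which is why controlling the test error under all Moreau envelopes of $\ell$, as in the abstract's main claim, comes close to controlling the test error under $\ell$ itself.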

Cite

Text

Zhou et al. "A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models." Neural Information Processing Systems, 2022.

Markdown

[Zhou et al. "A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/zhou2022neurips-nonasymptotic/)

BibTeX

@inproceedings{zhou2022neurips-nonasymptotic,
  title     = {{A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models}},
  author    = {Zhou, Lijia and Koehler, Frederic and Sur, Pragya and Sutherland, Danica J. and Srebro, Nati},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/zhou2022neurips-nonasymptotic/}
}