Learning with Square Loss: Localization Through Offset Rademacher Complexity

Abstract

We consider regression with square loss and general classes of functions without the boundedness assumption. We introduce a notion of offset Rademacher complexity that provides a transparent way to study localization both in expectation and in high probability. For any (possibly non-convex) class, the excess loss of a two-step estimator is shown to be upper bounded by this offset complexity through a novel geometric inequality. In the convex case, the estimator reduces to an empirical risk minimizer. The method recovers the results of Rakhlin, Sridharan, and Tsybakov (2015) for the bounded case while also providing guarantees without the boundedness assumption.
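To make the abstract's central objects concrete, the following LaTeX sketch reconstructs the offset Rademacher complexity and the two-step (star) estimator from the paper's setup; the 1/n normalization, the offset constant c, and the star-hull notation star(F, g) are conventions assumed here rather than verbatim quotations from the paper.

% Offset Rademacher complexity of a class F (sketch; the constant c > 0
% and the 1/n normalization are assumptions of this sketch).
\[
  \mathcal{R}^{\mathrm{off}}_n(\mathcal{F}, c)
  \;=\;
  \mathbb{E}_{\epsilon}\,
  \sup_{f \in \mathcal{F}}
  \frac{1}{n} \sum_{i=1}^{n}
  \left( \epsilon_i f(x_i) - c\, f(x_i)^2 \right),
\]
% where \epsilon_1, \dots, \epsilon_n are i.i.d. Rademacher signs. The
% negative quadratic term penalizes functions with large empirical norm,
% which is what produces localization without an explicit radius.

% Two-step estimator (sketch): an empirical risk minimizer over F,
% followed by an ERM over the star hull of F around the first-stage fit.
\[
  \hat{g} \in \operatorname*{arg\,min}_{f \in \mathcal{F}}
    \frac{1}{n} \sum_{i=1}^{n} \bigl( f(x_i) - y_i \bigr)^2,
  \qquad
  \hat{f} \in \operatorname*{arg\,min}_{f \in \operatorname{star}(\mathcal{F}, \hat{g})}
    \frac{1}{n} \sum_{i=1}^{n} \bigl( f(x_i) - y_i \bigr)^2,
\]
% with star(F, g) = { \lambda g + (1 - \lambda) f : f \in F, \lambda \in [0, 1] }.
% If F is convex, then star(F, \hat{g}) = F and the two-step procedure
% collapses to plain ERM, matching the abstract's remark on the convex case.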

Cite

Text

Liang et al. "Learning with Square Loss: Localization Through Offset Rademacher Complexity." Annual Conference on Computational Learning Theory, 2015.

Markdown

[Liang et al. "Learning with Square Loss: Localization Through Offset Rademacher Complexity." Annual Conference on Computational Learning Theory, 2015.](https://mlanthology.org/colt/2015/liang2015colt-learning/)

BibTeX

@inproceedings{liang2015colt-learning,
  title     = {{Learning with Square Loss: Localization Through Offset Rademacher Complexity}},
  author    = {Liang, Tengyuan and Rakhlin, Alexander and Sridharan, Karthik},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2015},
  pages     = {1260--1285},
  url       = {https://mlanthology.org/colt/2015/liang2015colt-learning/}
}