Learning Without Concentration
Abstract
We obtain sharp bounds on the estimation error of the Empirical Risk Minimization procedure, performed in a convex class and with respect to the squared loss, without assuming that class members and the target are bounded functions or have rapidly decaying tails. Rather than resorting to a concentration-based argument, the method used here relies on a “small-ball” assumption and thus holds for classes consisting of heavy-tailed functions and for heavy-tailed targets. The resulting estimates scale correctly with the “noise level” of the problem, and when applied to the classical, bounded scenario, always improve the known bounds.
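For concreteness, the two objects named in the abstract can be written out; the following is a sketch of the standard formulation rather than the paper's exact statement, and the constants κ and ε are generic placeholders. Given a sample (X_i, Y_i), i = 1, …, N, ERM over a convex class F with the squared loss selects

\[
\hat{f} \in \operatorname*{argmin}_{f \in F} \frac{1}{N} \sum_{i=1}^{N} \bigl( f(X_i) - Y_i \bigr)^2,
\]

and a small-ball condition on F asserts that there exist constants \kappa > 0 and 0 < \varepsilon \le 1 such that

\[
\Pr\bigl( |f(X) - h(X)| \ge \kappa \, \| f - h \|_{L_2} \bigr) \ge \varepsilon
\quad \text{for every } f, h \in F.
\]

Such a condition bounds the probability that differences of class members are small, without requiring boundedness or sub-Gaussian tails; see the paper for the precise quantifiers.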
Cite
Text
Mendelson. "Learning Without Concentration." Annual Conference on Computational Learning Theory, 2014. doi:10.1145/2699439
Markdown
[Mendelson. "Learning Without Concentration." Annual Conference on Computational Learning Theory, 2014.](https://mlanthology.org/colt/2014/mendelson2014colt-learning/) doi:10.1145/2699439
BibTeX
@inproceedings{mendelson2014colt-learning,
title = {{Learning Without Concentration}},
author = {Mendelson, Shahar},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2014},
pages = {25--39},
doi = {10.1145/2699439},
url = {https://mlanthology.org/colt/2014/mendelson2014colt-learning/}
}