Random Subclass Bounds
Abstract
It has recently been shown that sharp generalization bounds can be obtained when the function class from which the algorithm chooses its hypotheses is "small" in the sense that the Rademacher averages of this function class are small [8,9]. Based on seemingly different arguments, generalization bounds were also obtained in the compression scheme [7], luckiness [13], and algorithmic luckiness [6] frameworks, in which the "size" of the function class is not specified a priori. We show that the bounds obtained in all these frameworks follow from the same general principle, namely that coordinate projections of this function subclass evaluated on random samples are "small" with high probability.
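The Rademacher average that drives the bounds discussed above can be estimated empirically. The sketch below (not from the paper; the function names and the toy threshold class are illustrative choices) Monte-Carlo-estimates the empirical Rademacher average of a finite function class, i.e. the expectation over random signs of the supremum of the correlation between the signs and the function values on a fixed sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher(F_values, n_trials=2000, rng=rng):
    """Monte Carlo estimate of the empirical Rademacher average
    E_sigma [ sup_{f in F} (1/n) sum_i sigma_i f(x_i) ],
    where sigma is a vector of i.i.d. uniform signs in {-1, +1}.

    F_values: array of shape (num_functions, n), each row holding one
    function's values on the fixed sample x_1, ..., x_n.
    """
    num_f, n = F_values.shape
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=n)
        # Supremum over the (finite) class of the sign-weighted average.
        total += np.max(F_values @ sigma) / n
    return total / n_trials

# Toy example: threshold indicators f_t(x) = 1{x <= t} evaluated on a
# random sample -- a coordinate projection of the class onto the sample.
x = rng.uniform(0.0, 1.0, size=50)
thresholds = np.linspace(0.0, 1.0, 21)
F_values = (x[None, :] <= thresholds[:, None]).astype(float)

print(empirical_rademacher(F_values))
```

For a class of this small VC dimension the estimate is modest and shrinks as the sample size grows, which is exactly the "smallness" of random coordinate projections that the bounds exploit.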
Cite
Text
Mendelson and Philips. "Random Subclass Bounds." Annual Conference on Computational Learning Theory, 2003. doi:10.1007/978-3-540-45167-9_25
Markdown
[Mendelson and Philips. "Random Subclass Bounds." Annual Conference on Computational Learning Theory, 2003.](https://mlanthology.org/colt/2003/mendelson2003colt-random/) doi:10.1007/978-3-540-45167-9_25
BibTeX
@inproceedings{mendelson2003colt-random,
title = {{Random Subclass Bounds}},
author = {Mendelson, Shahar and Philips, Petra},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2003},
  pages = {329--343},
doi = {10.1007/978-3-540-45167-9_25},
url = {https://mlanthology.org/colt/2003/mendelson2003colt-random/}
}