Generalization Bounds for Uniformly Stable Algorithms
Abstract
Uniform stability of a learning algorithm is a classical notion of algorithmic stability introduced to derive high-probability bounds on the generalization error (Bousquet and Elisseeff, 2002). Specifically, for a loss function with range bounded in $[0,1]$, the generalization error of a $\gamma$-uniformly stable learning algorithm on $n$ samples is known to be at most $O((\gamma + 1/n) \sqrt{n \log(1/\delta)})$ with probability at least $1-\delta$. Unfortunately, this bound does not lead to meaningful generalization bounds in many common settings where $\gamma \geq 1/\sqrt{n}$. At the same time, the bound is known to be tight only when $\gamma = O(1/n)$. Here we prove substantially stronger generalization bounds for uniformly stable algorithms without any additional assumptions. First, we show that the generalization error in this setting is at most $O(\sqrt{(\gamma + 1/n) \log(1/\delta)})$ with probability at least $1-\delta$. In addition, we prove a tight bound of $O(\gamma^2 + 1/n)$ on the second moment of the generalization error; the best previous bound on this second moment was $O(\gamma + 1/n)$. Our proofs are based on new analysis techniques, and our results imply substantially stronger generalization guarantees for several well-studied algorithms.
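To see the improvement concretely, consider the illustrative regime $\gamma = n^{-1/2}$ (this worked comparison is ours, not part of the paper's abstract): the earlier high-probability bound stays bounded away from zero as $n$ grows, while the new bound still vanishes with $n$:
$$
\Big(\gamma + \tfrac{1}{n}\Big)\sqrt{n \log(1/\delta)} \;=\; \Theta\big(\sqrt{\log(1/\delta)}\big),
\qquad
\sqrt{\Big(\gamma + \tfrac{1}{n}\Big)\log(1/\delta)} \;=\; \Theta\big(n^{-1/4}\sqrt{\log(1/\delta)}\big)
\quad \text{at } \gamma = n^{-1/2}.
$$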
Cite
@inproceedings{feldman2018neurips-generalization,
  title     = {{Generalization Bounds for Uniformly Stable Algorithms}},
  author    = {Feldman, Vitaly and Vondrak, Jan},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {9747--9757},
  url       = {https://mlanthology.org/neurips/2018/feldman2018neurips-generalization/}
}