Algorithmic Stability and Hypothesis Complexity

Abstract

We introduce a notion of algorithmic stability of learning algorithms—that we term hypothesis stability—that captures stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. The main result of the paper bounds the generalization error of any learning algorithm in terms of its hypothesis stability. The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent.

Cite

Text

Liu et al. "Algorithmic Stability and Hypothesis Complexity." International Conference on Machine Learning, 2017.

Markdown

[Liu et al. "Algorithmic Stability and Hypothesis Complexity." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/liu2017icml-algorithmic/)

BibTeX

@inproceedings{liu2017icml-algorithmic,
  title     = {{Algorithmic Stability and Hypothesis Complexity}},
  author    = {Liu, Tongliang and Lugosi, Gábor and Neu, Gergely and Tao, Dacheng},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {2159-2167},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/liu2017icml-algorithmic/}
}