PAC-Bayes Bounds for Stable Algorithms with Instance-Dependent Priors
Abstract
PAC-Bayes bounds have been proposed to obtain risk estimates based on a training sample. In this paper the PAC-Bayes approach is combined with stability of the hypothesis learned by a Hilbert space-valued algorithm. The PAC-Bayes setting is used with a Gaussian prior centered at the expected output. A novelty of our paper is thus the use of priors defined in terms of the data-generating distribution. Our main result estimates the risk of the randomized algorithm in terms of the hypothesis stability coefficients. We also provide a new bound for the SVM classifier, which is compared to other known bounds experimentally. Ours appears to be the first uniform hypothesis stability-based bound that evaluates to non-trivial values.
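For context, a sketch of the generic PAC-Bayes-kl inequality (the Langford–Seeger form, stated here as background; it is not given in the abstract itself) that results of this kind build on: with probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for all posteriors $Q$,

```latex
% Generic PAC-Bayes-kl bound (background sketch, not the paper's theorem).
% P is the prior (in this paper, a Gaussian centered at the algorithm's
% expected output), Q the posterior, \hat{L}(Q) the empirical risk,
% L(Q) the true risk, and kl the binary KL divergence.
\mathrm{kl}\!\left(\hat{L}(Q) \,\middle\|\, L(Q)\right)
\;\le\;
\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{n}
```

The paper's instance-dependent choice of $P$ aims to make the $\mathrm{KL}(Q\,\|\,P)$ term small when the algorithm is stable, since a stable algorithm's output concentrates around its expectation.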
Cite
Text
Rivasplata et al. "PAC-Bayes Bounds for Stable Algorithms with Instance-Dependent Priors." Neural Information Processing Systems, 2018.
Markdown
[Rivasplata et al. "PAC-Bayes Bounds for Stable Algorithms with Instance-Dependent Priors." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/rivasplata2018neurips-pacbayes/)
BibTeX
@inproceedings{rivasplata2018neurips-pacbayes,
title = {{PAC-Bayes Bounds for Stable Algorithms with Instance-Dependent Priors}},
author = {Rivasplata, Omar and Parrado-Hernandez, Emilio and Shawe-Taylor, John S. and Sun, Shiliang and Szepesvari, Csaba},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {9214--9224},
url = {https://mlanthology.org/neurips/2018/rivasplata2018neurips-pacbayes/}
}