PAC-Bayes Analysis Beyond the Usual Bounds

Abstract

We focus on a stochastic learning model where the learner observes a finite set of training examples, and the output of the learning process is a data-dependent distribution over a space of hypotheses. The learned data-dependent distribution is then used to make randomized predictions, and the high-level theme addressed here is guaranteeing the quality of predictions on examples that were not seen during training, i.e., generalization. In this setting the unknown quantity of interest is the expected risk of the data-dependent randomized predictor, for which upper bounds can be derived via a PAC-Bayes analysis, leading to PAC-Bayes bounds.
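
As a point of reference for the kind of upper bound the abstract alludes to (this is the classical kl-form PAC-Bayes bound in the lineage of McAllester and Maurer, not one of this paper's new results, and the notation below is ours): for a loss bounded in $[0,1]$, write $R(h)$ for the expected risk of hypothesis $h$, $\hat{R}_S(h)$ for its empirical risk on an $n$-sample $S$, $P$ for a prior over hypotheses fixed before seeing $S$, and $Q$ for any (possibly data-dependent) posterior. Then, with probability at least $1-\delta$ over the draw of $S$, simultaneously for all posteriors $Q$ (in Maurer's version, for $n \ge 8$):

\mathrm{kl}\!\left( \mathbb{E}_{h \sim Q}\!\big[\hat{R}_S(h)\big] \,\middle\|\, \mathbb{E}_{h \sim Q}\!\big[R(h)\big] \right) \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{n}/\delta\big)}{n},

where $\mathrm{kl}(p \,\|\, q)$ denotes the KL divergence between Bernoulli distributions with parameters $p$ and $q$. Relaxing the left-hand side via Pinsker's inequality, $\mathrm{kl}(p \,\|\, q) \ge 2(p-q)^2$, yields the more familiar square-root form:

\mathbb{E}_{h \sim Q}\!\big[R(h)\big] \;\le\; \mathbb{E}_{h \sim Q}\!\big[\hat{R}_S(h)\big] + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}}.

The paper's contribution lies beyond such usual bounds, e.g. in handling data-dependent priors and unbounded losses; the display above only fixes the baseline template that "PAC-Bayes bounds" refers to.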

Cite

Text

Rivasplata et al. "PAC-Bayes Analysis Beyond the Usual Bounds." Neural Information Processing Systems, 2020.

Markdown

[Rivasplata et al. "PAC-Bayes Analysis Beyond the Usual Bounds." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/rivasplata2020neurips-pacbayes/)

BibTeX

@inproceedings{rivasplata2020neurips-pacbayes,
  title     = {{PAC-Bayes Analysis Beyond the Usual Bounds}},
  author    = {Rivasplata, Omar and Kuzborskij, Ilja and Szepesvari, Csaba and Shawe-Taylor, John},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/rivasplata2020neurips-pacbayes/}
}