PAC-Bayes Learning Bounds for Sample-Dependent Priors

Abstract

We present a series of new PAC-Bayes learning guarantees for randomized algorithms with sample-dependent priors. Our most general bounds make no assumption on the priors and are given in terms of certain covering numbers under the infinite-Rényi divergence and the L1 distance. We show how to use these general bounds to derive learning bounds in the setting where the sample-dependent priors obey an infinite-Rényi divergence or L1-distance sensitivity condition. We also provide a flexible framework for computing PAC-Bayes bounds, under certain stability assumptions on the sample-dependent priors, and show how to use this framework to give more refined bounds when the priors satisfy an infinite-Rényi divergence sensitivity condition.
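For readers unfamiliar with the quantities named in the abstract, here is a minimal sketch, not taken from the paper, of the two central objects: the infinite-Rényi divergence between a posterior Q and a prior P, and one common form of the classical PAC-Bayes bound for a fixed, data-independent prior, which analyses of this kind extend to priors that depend on the sample.

```latex
% Infinite-Renyi divergence between posterior Q and prior P:
% the log of the essential supremum of the density ratio.
D_{\infty}(Q \,\|\, P) \;=\; \log \operatorname*{ess\,sup}_{h} \frac{dQ}{dP}(h)

% One common form of the classical PAC-Bayes bound (data-independent
% prior P): with probability at least 1 - \delta over a sample S of
% size m, simultaneously for all posteriors Q,
L(Q) \;\le\; \widehat{L}_{S}(Q)
  \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \log\frac{m}{\delta}}{2(m-1)}}
```

The paper's setting replaces the fixed prior P with a sample-dependent prior P(S), which is where the covering numbers and sensitivity conditions under the infinite-Rényi divergence and the L1 distance enter.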

Cite

Text

Awasthi et al. "PAC-Bayes Learning Bounds for Sample-Dependent Priors." Neural Information Processing Systems, 2020.

Markdown

[Awasthi et al. "PAC-Bayes Learning Bounds for Sample-Dependent Priors." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/awasthi2020neurips-pacbayes/)

BibTeX

@inproceedings{awasthi2020neurips-pacbayes,
  title     = {{PAC-Bayes Learning Bounds for Sample-Dependent Priors}},
  author    = {Awasthi, Pranjal and Kale, Satyen and Karp, Stefani and Mohri, Mehryar},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/awasthi2020neurips-pacbayes/}
}