A Limitation of the PAC-Bayes Framework
Abstract
PAC-Bayes is a useful framework for deriving generalization bounds, introduced by McAllester (1998). The framework is flexible enough to yield distribution- and algorithm-dependent bounds, which are often tighter than VC-based uniform-convergence bounds. In this manuscript we present a limitation of the PAC-Bayes framework: we demonstrate an easy learning task that is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in 1D; it is well known that this task is learnable using just $O(\log(1/\delta)/\epsilon)$ examples. On the other hand, we show that this fact cannot be proved via a PAC-Bayes analysis: for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large.
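For context, one standard McAllester-style form of the bound (there are several variants; the notation below is ours, not quoted from the paper) reads: for any prior $P$ fixed before seeing the data, with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $m$, simultaneously for all posteriors $Q$,

$$
L_{\mathcal{D}}(Q) \;\le\; L_S(Q) + \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln(m/\delta)}{2(m-1)}},
$$

where $L_{\mathcal{D}}(Q)$ and $L_S(Q)$ are the population and empirical losses of the Gibbs classifier drawn from $Q$. Informally, the paper's negative result says that for 1D thresholds no choice of prior keeps the $\mathrm{KL}(Q\,\|\,P)$ term small across all realizable distributions.

For the positive side, here is a minimal sketch (ours, not code from the paper) of the folklore learner behind the $O(\log(1/\delta)/\epsilon)$ upper bound: ERM for 1D thresholds, which returns a threshold consistent with the sample.

```python
import random

def erm_threshold(sample):
    """Return a threshold consistent with a realizable 1D sample.

    sample: iterable of (x, y) pairs with y in {0, 1}, labeled by some
    true threshold t*, i.e. y = 1 iff x >= t*.
    """
    positives = [x for x, y in sample if y == 1]
    # The smallest positive point is a consistent threshold: in a
    # realizable sample, every point below it is labeled 0. With no
    # positives, +inf encodes the all-negative classifier.
    return min(positives) if positives else float("inf")

# Usage: m = O(log(1/delta)/epsilon) examples suffice for this class.
t_star = 0.3
data = [(x, int(x >= t_star)) for x in (random.random() for _ in range(50))]
t_hat = erm_threshold(data)  # classify a new x as 1 iff x >= t_hat
```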
Cite
Text
Livni and Moran. "A Limitation of the PAC-Bayes Framework." Neural Information Processing Systems, 2020.
Markdown
[Livni and Moran. "A Limitation of the PAC-Bayes Framework." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/livni2020neurips-limitation/)
BibTeX
@inproceedings{livni2020neurips-limitation,
  title = {{A Limitation of the PAC-Bayes Framework}},
  author = {Livni, Roi and Moran, Shay},
  booktitle = {Neural Information Processing Systems},
  year = {2020},
  url = {https://mlanthology.org/neurips/2020/livni2020neurips-limitation/}
}