PAC-Bayesian Generic Chaining
Abstract
There exist many different generalization error bounds for classification. Each of these bounds contains an improvement over the others for certain situations. Our goal is to combine these different improvements into a single bound. In particular we combine the PAC-Bayes approach introduced by McAllester [1], which is interesting for averaging classifiers, with the optimal union bound provided by the generic chaining technique developed by Fernique and Talagrand [2]. This combination is quite natural since the generic chaining is based on the notion of majorizing measures, which can be considered as priors on the set of classifiers, and such priors also arise in the PAC-Bayesian setting.
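For orientation, a standard form of McAllester's PAC-Bayes bound is sketched below (a sketch only, not the refined bound derived in this paper; the exact constants vary across statements in the literature). Here $R(h)$ is the true risk of classifier $h$, $\hat R_n(h)$ its empirical risk on an i.i.d. sample of size $n$, $\pi$ a prior on the set of classifiers fixed before seeing the data, and $\rho$ any posterior. With probability at least $1-\delta$ over the sample, simultaneously for all $\rho$,

$$
\mathbb{E}_{h\sim\rho}\big[R(h)\big] \;\le\; \mathbb{E}_{h\sim\rho}\big[\hat R_n(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi)+\ln\frac{1}{\delta}+\ln n + 2}{2n-1}}.
$$

The prior $\pi$ in this bound plays the same role as a majorizing measure in the generic chaining: both fix a distribution over the classifiers independently of the data, which is what makes the combination described in the abstract natural.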
Cite

Text:
Audibert and Bousquet. "PAC-Bayesian Generic Chaining." Neural Information Processing Systems, 2003.

Markdown:
[Audibert and Bousquet. "PAC-Bayesian Generic Chaining." Neural Information Processing Systems, 2003.](https://mlanthology.org/neurips/2003/audibert2003neurips-pacbayesian/)

BibTeX:
@inproceedings{audibert2003neurips-pacbayesian,
title = {{PAC-Bayesian Generic Chaining}},
author = {Audibert, Jean-Yves and Bousquet, Olivier},
booktitle = {Neural Information Processing Systems},
year = {2003},
pages = {1125-1132},
url = {https://mlanthology.org/neurips/2003/audibert2003neurips-pacbayesian/}
}