PAC-Bayes Risk Bounds for Sample-Compressed Gibbs Classifiers
Abstract
We extend the PAC-Bayes theorem to the sample-compression setting, where each classifier is represented by two independent sources of information: a compression set, consisting of a small subset of the training data, and a message string encoding the additional information needed to obtain a classifier. The new bound is obtained by using a prior over a data-independent set of objects, where each object yields a classifier only once the training data is provided. The new PAC-Bayes theorem states that a Gibbs classifier defined by a posterior over sample-compressed classifiers can have a smaller risk bound than any such (deterministic) sample-compressed classifier.
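For context, the classical PAC-Bayes bound that this work extends can be written as follows. This is a sketch in the standard Langford–Seeger form; the symbols (m for the sample size, P and Q for the prior and posterior over classifiers, R and R_S for the true and empirical risk of the Gibbs classifier G_Q, and kl for the binary relative entropy) are conventions assumed from the PAC-Bayes literature, not notation quoted from this paper.

% Classical PAC-Bayes theorem (Langford--Seeger form); a sketch for context,
% using standard notation assumed from the literature, not from this paper.
% With probability at least 1 - \delta over the random draw of an i.i.d.
% sample S of size m, simultaneously for every posterior Q:
\[
  \mathrm{kl}\!\left( R_S(G_Q) \,\middle\|\, R(G_Q) \right)
  \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m+1}{\delta}}{m},
\]
% where kl(q \| p) = q \ln(q/p) + (1-q) \ln((1-q)/(1-p)) is the binary
% relative entropy and KL(Q \| P) is the Kullback--Leibler divergence
% between the posterior Q and the prior P.

The paper's contribution, per the abstract, is to make a bound of this type valid when the prior is placed over data-independent compression-set/message objects rather than directly over classifiers.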
Cite
Laviolette and Marchand. "PAC-Bayes Risk Bounds for Sample-Compressed Gibbs Classifiers." International Conference on Machine Learning, 2005. doi:10.1145/1102351.1102412
BibTeX
@inproceedings{laviolette2005icml-pac,
title = {{PAC-Bayes Risk Bounds for Sample-Compressed Gibbs Classifiers}},
author = {Laviolette, François and Marchand, Mario},
booktitle = {International Conference on Machine Learning},
year = {2005},
pages = {481--488},
doi = {10.1145/1102351.1102412},
url = {https://mlanthology.org/icml/2005/laviolette2005icml-pac/}
}