Risk Bounds for Randomized Sample Compressed Classifiers
Abstract
We derive risk bounds for randomized classifiers in the sample compression setting, where the classifier specification uses two sources of information, viz. the compression set and the message string. By extending the recently proposed Occam's Hammer principle to data-dependent settings, we derive pointwise versions of the bounds on stochastic sample compressed classifiers and also recover the corresponding classical PAC-Bayes bound. We further show how these bounds compare favorably to existing results.
Cite
Text
Shah. "Risk Bounds for Randomized Sample Compressed Classifiers." Neural Information Processing Systems, 2008.

Markdown
[Shah. "Risk Bounds for Randomized Sample Compressed Classifiers." Neural Information Processing Systems, 2008.](https://mlanthology.org/neurips/2008/shah2008neurips-risk/)

BibTeX
@inproceedings{shah2008neurips-risk,
  title = {{Risk Bounds for Randomized Sample Compressed Classifiers}},
  author = {Shah, Mohak},
  booktitle = {Neural Information Processing Systems},
  year = {2008},
  pages = {1449--1456},
  url = {https://mlanthology.org/neurips/2008/shah2008neurips-risk/}
}