(Not) Bounding the True Error
Abstract
We present a new approach to bounding the true error rate of a continuous valued classifier based upon PAC-Bayes bounds. The method first constructs a distribution over classifiers by determining how sensitive each parameter in the model is to noise. The true error rate of the stochastic classifier found with the sensitivity analysis can then be tightly bounded using a PAC-Bayes bound. In this paper we demonstrate the method on artificial neural networks with results of an order of magnitude improvement vs. the best deterministic neural net bounds.
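The final step the abstract describes, turning the stochastic classifier's empirical error into a bound on its true error, is typically done by inverting a McAllester-style PAC-Bayes bound of the form KL(empirical error || true error) <= (KL(Q||P) + ln((m+1)/delta)) / m. The sketch below illustrates that inversion step only; it is not the paper's code, the exact constants in the bound vary between PAC-Bayes versions, and the function names and example numbers are purely illustrative.

import math

def binary_kl(q, p):
    # KL divergence between Bernoulli(q) and Bernoulli(p), clipped for stability.
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def pac_bayes_error_bound(emp_error, kl_qp, m, delta):
    # Largest true-error value p consistent with the (assumed) bound
    # binary_kl(emp_error, p) <= (KL(Q||P) + ln((m+1)/delta)) / m,
    # found by binary search, since binary_kl is increasing in p for p >= emp_error.
    rhs = (kl_qp + math.log((m + 1) / delta)) / m
    lo, hi = emp_error, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if binary_kl(emp_error, mid) > rhs:
            hi = mid
        else:
            lo = mid
    return hi

# Hypothetical example: a stochastic net with 1% empirical error,
# KL(Q||P) = 50 nats, 10,000 training examples, 95% confidence.
print(pac_bayes_error_bound(0.01, 50.0, 10000, 0.05))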
Cite
Text
Langford and Caruana. "(Not) Bounding the True Error." Neural Information Processing Systems, 2001.
Markdown
[Langford and Caruana. "(Not) Bounding the True Error." Neural Information Processing Systems, 2001.](https://mlanthology.org/neurips/2001/langford2001neurips-bounding/)
BibTeX
@inproceedings{langford2001neurips-bounding,
title = {{(Not) Bounding the True Error}},
author = {Langford, John and Caruana, Rich},
booktitle = {Neural Information Processing Systems},
year = {2001},
pages = {809-816},
url = {https://mlanthology.org/neurips/2001/langford2001neurips-bounding/}
}