A PAC-Bayes Bound for Tailored Density Estimation
Abstract
In this paper we construct a general method for reporting on the accuracy of density estimation. Using variational methods from statistical learning theory we derive a PAC, algorithm-dependent bound on the distance between the data generating distribution and a learned approximation. The distance measure takes the role of a loss function that can be tailored to the learning problem, enabling us to control discrepancies on tasks relevant to subsequent inference. We apply the bound to an efficient mixture learning algorithm. Using the method of localisation we encode properties of both the algorithm and the data generating distribution, producing a tight, empirical, algorithm-dependent upper risk bound on the performance of the learner. We discuss other uses of the bound for arbitrary distributions and model averaging.
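For context, the flavour of result the abstract refers to can be illustrated by the classical PAC-Bayes theorem (Seeger's form, for a [0,1]-valued loss). This is standard background, not the paper's tailored bound, which replaces classification risk with a task-specific distance between densities:

```latex
% Classical PAC-Bayes bound (Seeger's form), shown for context only.
% With probability at least 1 - \delta over an i.i.d. sample of size m,
% simultaneously for every posterior Q over hypotheses, given a fixed
% prior P chosen before seeing the data:
\mathrm{kl}\!\left(\hat{R}_m(Q)\,\middle\|\,R(Q)\right)
  \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{m}}{\delta}}{m}
% Here R(Q) is the true risk of the Q-average, \hat{R}_m(Q) its empirical
% counterpart, \mathrm{KL} the Kullback-Leibler divergence between
% distributions over hypotheses, and \mathrm{kl} its binary special case.
```

The localisation technique mentioned in the abstract tightens bounds of this shape by choosing a prior that depends on the data-generating distribution and the algorithm, shrinking the KL(Q‖P) complexity term.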
Cite
Text
Higgs and Shawe-Taylor. "A PAC-Bayes Bound for Tailored Density Estimation." International Conference on Algorithmic Learning Theory, 2010. doi:10.1007/978-3-642-16108-7_15

Markdown

[Higgs and Shawe-Taylor. "A PAC-Bayes Bound for Tailored Density Estimation." International Conference on Algorithmic Learning Theory, 2010.](https://mlanthology.org/alt/2010/higgs2010alt-pacbayes/) doi:10.1007/978-3-642-16108-7_15

BibTeX
@inproceedings{higgs2010alt-pacbayes,
title = {{A PAC-Bayes Bound for Tailored Density Estimation}},
author = {Higgs, Matthew and Shawe-Taylor, John},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2010},
pages = {148--162},
doi = {10.1007/978-3-642-16108-7_15},
url = {https://mlanthology.org/alt/2010/higgs2010alt-pacbayes/}
}