Computable Bayesian Compression for Uniformly Discretizable Statistical Models

Abstract

Supplementing Vovk and V’yugin’s ‘if’ statement, we show that Bayesian compression provides the best enumerable compression for parameter-typical data if and only if the parameter is Martin-Löf random with respect to the prior. The result is derived for uniformly discretizable statistical models, introduced here. They feature the crucial property that, given a discretized parameter, we can compute how much data is needed to learn its value with little uncertainty. Exponential families and certain nonparametric models are shown to be uniformly discretizable.
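
A rough formalization of the claimed equivalence, in standard algorithmic information theory notation (this gloss and the symbols $P_\pi$ and $M$ are ours, not taken from the paper): write $\{P_\theta\}_{\theta \in \Theta}$ for the model, $\pi$ for a computable prior, $P_\pi(x_{1:n}) = \int_\Theta P_\theta(x_{1:n})\,\pi(d\theta)$ for the Bayesian mixture, and $M$ for the universal enumerable semimeasure, whose code length $-\log M$ is optimal among enumerable compressors up to an additive constant. The theorem then reads, roughly,

$$-\log P_\pi(x_{1:n}) = -\log M(x_{1:n}) + O(1) \ \text{ for } P_\theta\text{-typical } x \quad\Longleftrightarrow\quad \theta \text{ is Martin-L\"of random w.r.t. } \pi.$$

As a concrete illustration of why Bayesian compression is computable for an exponential family, consider the Bernoulli model with a conjugate Beta prior, where the mixture code length has a closed form via the Beta function. The sketch below is ours, not the paper's construction; the function names and the Jeffreys-prior default are illustrative choices.

```python
from math import lgamma, log

def bayes_code_length_bernoulli(k: int, n: int, a: float = 0.5, b: float = 0.5) -> float:
    """Code length in bits of the Bayesian mixture code for a Bernoulli sample
    with k ones out of n, under a Beta(a, b) prior (Jeffreys by default).
    Closed form: -log2 [ B(a + k, b + n - k) / B(a, b) ]."""
    def log_beta(x: float, y: float) -> float:
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return -(log_beta(a + k, b + n - k) - log_beta(a, b)) / log(2)

def ml_code_length_bernoulli(k: int, n: int) -> float:
    """Idealized code length -log2 P_theta(x) at the maximum-likelihood theta = k/n."""
    if k in (0, n):
        return 0.0
    p = k / n
    return -(k * log(p, 2) + (n - k) * log(1 - p, 2))

# For parameter-typical data, the computable mixture code pays only a
# ~(1/2) log2 n overhead over coding with the (unknown) true parameter.
for n in (100, 10_000, 1_000_000):
    k = int(0.3 * n)  # a typical sample for theta near 0.3
    overhead = bayes_code_length_bernoulli(k, n) - ml_code_length_bernoulli(k, n)
    print(f"n={n:>9}: mixture overhead {overhead:.2f} bits (cf. 0.5*log2 n = {0.5 * log(n, 2):.2f})")
```

The logarithmic overhead visible in the output is consistent with the standard picture of mixture codes over smooth parametric families; the paper's contribution concerns when this computable mixture code matches the best enumerable compressor, per the equivalence above.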

Cite

Text

Dębowski. “Computable Bayesian Compression for Uniformly Discretizable Statistical Models.” International Conference on Algorithmic Learning Theory, 2009. doi:10.1007/978-3-642-04414-4_9

Markdown

[Dębowski. "Computable Bayesian Compression for Uniformly Discretizable Statistical Models." International Conference on Algorithmic Learning Theory, 2009.](https://mlanthology.org/alt/2009/debowski2009alt-computable/) doi:10.1007/978-3-642-04414-4_9

BibTeX

@inproceedings{debowski2009alt-computable,
  title     = {{Computable Bayesian Compression for Uniformly Discretizable Statistical Models}},
  author    = {D{\k{e}}bowski, {\L}ukasz},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2009},
  pages     = {53--67},
  doi       = {10.1007/978-3-642-04414-4_9},
  url       = {https://mlanthology.org/alt/2009/debowski2009alt-computable/}
}