Monotone Conditional Complexity Bounds on Future Prediction Errors

Abstract

We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor M from the true distribution μ by the algorithmic complexity of μ. Here we assume we are at a time t > 1 and have already observed x = x_1...x_t. We bound the future prediction performance on x_{t+1} x_{t+2} ... by a new variant of algorithmic complexity of μ given x, plus the complexity of the randomness deficiency of x. The new complexity is monotone in its condition in the sense that it can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
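For orientation, the classical total bound of Solomonoff referred to in the abstract is usually quoted in roughly the following form; this is a sketch of the standard formulation from the algorithmic-prediction literature (constants may vary by presentation), not a restatement of the paper's new conditional result:

\sum_{t=1}^{\infty} \mathbf{E}_{\mu}\!\left[\big(M(0 \mid x_{<t}) - \mu(0 \mid x_{<t})\big)^{2}\right] \;\le\; \tfrac{\ln 2}{2}\, K(\mu)
% Solomonoff's bound: the expected sum of squared errors of the universal
% predictor M against the true computable distribution mu is finite and
% bounded by (a constant times) the prefix complexity K(mu).
% The paper's contribution is a future-loss analogue: after observing
% x = x_1...x_t, the remaining loss on x_{t+1} x_{t+2} ... is bounded by a
% monotone conditional complexity of mu given x, plus the complexity of the
% randomness deficiency of x; see the paper for the exact statement.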

Cite

Text

Chernov and Hutter. "Monotone Conditional Complexity Bounds on Future Prediction Errors." International Conference on Algorithmic Learning Theory, 2005. doi:10.1007/11564089_32

Markdown

[Chernov and Hutter. "Monotone Conditional Complexity Bounds on Future Prediction Errors." International Conference on Algorithmic Learning Theory, 2005.](https://mlanthology.org/alt/2005/chernov2005alt-monotone/) doi:10.1007/11564089_32

BibTeX

@inproceedings{chernov2005alt-monotone,
  title     = {{Monotone Conditional Complexity Bounds on Future Prediction Errors}},
  author    = {Chernov, Alexey V. and Hutter, Marcus},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2005},
  pages     = {414--428},
  doi       = {10.1007/11564089_32},
  url       = {https://mlanthology.org/alt/2005/chernov2005alt-monotone/}
}