How Many Strings Are Easy to Predict?
Abstract
It is well known in the theory of Kolmogorov complexity that most strings cannot be compressed; more precisely, only exponentially few (Θ(2^(n−m))) strings of length n can be compressed by m bits. This paper extends the ‘incompressibility’ property of Kolmogorov complexity to the ‘unpredictability’ property of predictive complexity. The ‘unpredictability’ property states that the predictive complexity (defined as the loss suffered by a universal prediction algorithm working infinitely long) of most strings is close to a trivial upper bound (the loss suffered by a trivial minimax constant prediction strategy). We show that only exponentially few strings can be successfully predicted and find the base of the exponent.
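The exponential bound quoted above follows from a standard counting argument, which can be sketched as follows (this is an illustration of the classical Kolmogorov-complexity fact, not the paper's own construction): a string of length n is compressible by m bits only if it has a description of length at most n − m − 1, and there are only 2^(n−m) − 1 such binary descriptions.

```python
def max_compressible(n: int, m: int) -> int:
    """Upper bound on the number of length-n binary strings compressible
    by m bits: the count of all binary descriptions of length < n - m,
    i.e. 2**0 + 2**1 + ... + 2**(n-m-1) = 2**(n-m) - 1."""
    return sum(2 ** k for k in range(n - m))

# With n = 10 and m = 3, fewer than 2**7 = 128 of the 2**10 = 1024
# strings can be compressed by 3 bits: a fraction below 2**-m.
n, m = 10, 3
print(max_compressible(n, m), 2 ** n)
```

Since each description produces at most one string, fewer than 2^(n−m) of the 2^n strings of length n can be compressed by m bits, i.e. a fraction smaller than 2^(−m).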
Cite
Text

Kalnishkan et al. "How Many Strings Are Easy to Predict?" Annual Conference on Computational Learning Theory, 2003. doi:10.1007/978-3-540-45167-9_38

Markdown

[Kalnishkan et al. "How Many Strings Are Easy to Predict?" Annual Conference on Computational Learning Theory, 2003.](https://mlanthology.org/colt/2003/kalnishkan2003colt-many/) doi:10.1007/978-3-540-45167-9_38

BibTeX
@inproceedings{kalnishkan2003colt-many,
title = {{How Many Strings Are Easy to Predict?}},
author = {Kalnishkan, Yuri and Vovk, Vladimir and Vyugin, Michael V.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2003},
pages = {522--536},
doi = {10.1007/978-3-540-45167-9_38},
url = {https://mlanthology.org/colt/2003/kalnishkan2003colt-many/}
}