Predictive Complexity and Information
Abstract
A new notion of predictive complexity and a corresponding amount of information are considered. Predictive complexity is a generalization of Kolmogorov complexity which bounds the ability of any algorithm to predict elements of a sequence of outcomes. We consider predictive complexity for a wide class of bounded loss functions which generalize the square-loss function. Relations between the unconditional KG(x) and conditional KG(x|y) predictive complexities are studied. We define an algorithm which has an "expanding property": with positive probability, it transforms sequences of given predictive complexity into sequences of essentially bigger predictive complexity. A concept of amount of predictive information IG(y:x) is studied. We show that this information is non-commutative in a very strong sense and present asymptotic relations between the values IG(y:x), IG(x:y), KG(x) and KG(y).
Cite
Text
Vyugin and V'yugin. "Predictive Complexity and Information." Annual Conference on Computational Learning Theory, 2002. doi:10.1007/3-540-45435-7_7
Markdown
[Vyugin and V'yugin. "Predictive Complexity and Information." Annual Conference on Computational Learning Theory, 2002.](https://mlanthology.org/colt/2002/vyugin2002colt-predictive/) doi:10.1007/3-540-45435-7_7
BibTeX
@inproceedings{vyugin2002colt-predictive,
title = {{Predictive Complexity and Information}},
author = {Vyugin, Michael V. and V'yugin, Vladimir V.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2002},
pages = {90--104},
doi = {10.1007/3-540-45435-7_7},
url = {https://mlanthology.org/colt/2002/vyugin2002colt-predictive/}
}