Algorithmic Identification of Probabilities Is Hard
Abstract
Suppose that we are given an infinite binary sequence which is random for a Bernoulli measure of parameter p. By the law of large numbers, the frequency of zeros in the sequence tends to p, so we can compute better and better approximations of p as we read the sequence. In this paper we study a similar question from the viewpoint of inductive inference. We now suppose that p is a computable real and ask for more: as we read more and more bits of the random sequence, we must eventually guess the exact parameter p (in the form of its Turing code). Can this be done uniformly for all sequences that are random for computable Bernoulli measures, or even for a ‘large enough’ fraction of them? We give a negative answer to this question. In fact, we prove a very general negative result which extends far beyond the class of Bernoulli measures.
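
To make the contrast drawn in the abstract concrete, here is a minimal Python sketch (all names are illustrative, not from the paper) of the ‘easy’ half: estimating p as the empirical frequency of zeros in longer and longer prefixes of the sequence, which by the law of large numbers converges to p.

```python
import random

def bernoulli_prefix(p, n, seed=0):
    """Sample the first n bits of a Bernoulli(p) sequence: each bit is 0 with probability p."""
    rng = random.Random(seed)
    return [0 if rng.random() < p else 1 for _ in range(n)]

def frequency_estimate(bits):
    """Estimate p as the empirical frequency of zeros in the prefix read so far."""
    return bits.count(0) / len(bits)

# Longer prefixes give better approximations of p, but only approximations --
# never a program (Turing code) computing p exactly.
for n in (10, 1_000, 100_000):
    prefix = bernoulli_prefix(p=0.3, n=n)
    print(n, frequency_estimate(prefix))
```

Frequency estimation only ever yields approximations; identification in the limit demands eventually outputting a single stabilized program for p, a strictly stronger requirement, and it is this stronger task that the paper proves impossible in general.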
Cite
Text
Bienvenu et al. "Algorithmic Identification of Probabilities Is Hard." International Conference on Algorithmic Learning Theory, 2014. doi:10.1007/978-3-319-11662-4_7

Markdown
[Bienvenu et al. "Algorithmic Identification of Probabilities Is Hard." International Conference on Algorithmic Learning Theory, 2014.](https://mlanthology.org/alt/2014/bienvenu2014alt-algorithmic/) doi:10.1007/978-3-319-11662-4_7

BibTeX
@inproceedings{bienvenu2014alt-algorithmic,
title = {{Algorithmic Identification of Probabilities Is Hard}},
author = {Bienvenu, Laurent and Monin, Benoit and Shen, Alexander},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2014},
  pages = {85--95},
doi = {10.1007/978-3-319-11662-4_7},
url = {https://mlanthology.org/alt/2014/bienvenu2014alt-algorithmic/}
}