Asymptotic Log-Loss of Prequential Maximum Likelihood Codes

Abstract

We analyze the Dawid-Rissanen prequential maximum likelihood codes relative to one-parameter exponential family models M. If data are i.i.d. according to an (essentially) arbitrary P, then the redundancy grows at rate (c/2) ln n. We show that c = σ₁²/σ₂², where σ₁² is the variance of P, and σ₂² is the variance of the distribution M* ∈ M that is closest to P in KL divergence. This shows that prequential codes behave quite differently from other important universal codes such as the 2-part MDL, Shtarkov, and Bayes codes, for which c = 1. This behavior is undesirable in an MDL model selection setting.
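
The rate stated in the abstract can be checked numerically. Below is a minimal simulation sketch (our construction for illustration, not from the paper): the model is the normal location family {Normal(μ, 1) : μ ∈ ℝ}, a one-parameter exponential family whose members all have variance σ₂² = 1, while the data are drawn from Normal(0, σ₁²) with σ₁² = 4. The KL-closest element is then M* = Normal(0, 1), so the theorem predicts c = σ₁²/σ₂² = 4. The prequential code predicts each outcome with the plug-in ML estimate (the running mean); the function name `prequential_redundancy` and the starting estimate μ̂₀ = 0 are assumptions made here for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma1_sq = 4.0   # variance of the data-generating P = Normal(0, 4)  (assumed setup)
sigma2_sq = 1.0   # variance shared by all model elements Normal(mu, 1)
c = sigma1_sq / sigma2_sq   # the theorem's constant: sigma1^2 / sigma2^2 = 4

def prequential_redundancy(n):
    """Code-length difference (in nats) between the prequential plug-in
    code and the KL-closest model element M* = Normal(0, 1), on one
    sample path of length n."""
    x = rng.normal(0.0, np.sqrt(sigma1_sq), size=n)
    mu_hat = 0.0   # plug-in ML estimate (running mean), started at 0 by convention
    diff = 0.0
    for i, xi in enumerate(x):
        # -ln phi(xi; mu_hat, 1) + ln phi(xi; 0, 1); the constants cancel
        diff += 0.5 * (xi - mu_hat) ** 2 - 0.5 * xi ** 2
        mu_hat = (mu_hat * i + xi) / (i + 1)   # running mean of x[0..i]
    return diff

n, trials = 10_000, 200
avg = np.mean([prequential_redundancy(n) for _ in range(trials)])
print(f"empirical redundancy ≈ {avg:.1f}")
print(f"(c/2) ln n           ≈ {0.5 * c * np.log(n):.1f}")
```

With these settings both printed numbers should land in roughly the 18-20 range; the remaining gap is the O(1) term that the (c/2) ln n asymptotics ignores. Rerunning with σ₁² = 1 (so P lies in the model and c = 1) shrinks the redundancy to about (1/2) ln n, matching the behavior of the 2-part MDL, Shtarkov, and Bayes codes.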

Cite

Text

Grünwald and de Rooij. "Asymptotic Log-Loss of Prequential Maximum Likelihood Codes." Annual Conference on Computational Learning Theory, 2005. doi:10.1007/11503415_44

Markdown

[Grünwald and de Rooij. "Asymptotic Log-Loss of Prequential Maximum Likelihood Codes." Annual Conference on Computational Learning Theory, 2005.](https://mlanthology.org/colt/2005/grunwald2005colt-asymptotic/) doi:10.1007/11503415_44

BibTeX

@inproceedings{grunwald2005colt-asymptotic,
  title     = {{Asymptotic Log-Loss of Prequential Maximum Likelihood Codes}},
  author    = {Grünwald, Peter and de Rooij, Steven},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2005},
  pages     = {652--667},
  doi       = {10.1007/11503415_44},
  url       = {https://mlanthology.org/colt/2005/grunwald2005colt-asymptotic/}
}