Sequential Prediction with Coded Side Information Under Logarithmic Loss

Abstract

We study the problem of sequential prediction with coded side information under logarithmic loss (log-loss). We show an operational equivalence between this setup and lossy compression with log-loss distortion. Using this insight, together with recent work on lossy compression with log-loss, we connect prediction strategies with distributions in a certain subset of the probability simplex. This allows us to derive a Shtarkov-like bound for regret and to evaluate the regret for several illustrative classes of experts. In the present work, we mainly focus on the “batch” side information setting with sequential prediction.
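The Shtarkov-like regret bound mentioned in the abstract can be illustrated with a minimal sketch (a standard textbook computation, not code from the paper): under log-loss, the minimax regret against an expert class equals the log of the Shtarkov sum, i.e., the sum over all sequences of the maximum-likelihood probability assigned by the class. The example below evaluates this for the classical Bernoulli expert class on binary strings of length `n`, grouping sequences by their number of ones.

```python
import math

def log_loss(p, x):
    """Log-loss (in bits) of a predictive distribution p (dict: symbol -> prob)
    on the realized outcome x."""
    return -math.log2(p[x])

def shtarkov_regret(n):
    """Minimax regret (in bits) for the Bernoulli expert class on binary
    strings of length n: log2 of the Shtarkov sum
        sum_{x^n} max_theta P_theta(x^n).
    Sequences are grouped by type: there are C(n, k) sequences with k ones,
    each with maximum-likelihood probability (k/n)^k ((n-k)/n)^(n-k)."""
    total = 0.0
    for k in range(n + 1):
        theta = k / n
        if 0 < k < n:
            ml = (theta ** k) * ((1 - theta) ** (n - k))
        else:
            ml = 1.0  # all-zeros or all-ones sequence: ML probability is 1
        total += math.comb(n, k) * ml
    return math.log2(total)
```

For instance, `shtarkov_regret(1)` is exactly 1 bit, since both single-symbol sequences receive maximum-likelihood probability 1 and the Shtarkov sum is 2; for larger `n` the regret grows on the order of (1/2) log2 n, as is standard for one-parameter classes.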

Cite

Text

Shkel et al. "Sequential Prediction with Coded Side Information Under Logarithmic Loss." Proceedings of Algorithmic Learning Theory, 2018.

Markdown

[Shkel et al. "Sequential Prediction with Coded Side Information Under Logarithmic Loss." Proceedings of Algorithmic Learning Theory, 2018.](https://mlanthology.org/alt/2018/shkel2018alt-sequential/)

BibTeX

@inproceedings{shkel2018alt-sequential,
  title     = {{Sequential Prediction with Coded Side Information Under Logarithmic Loss}},
  author    = {Shkel, Yanina and Raginsky, Maxim and Verdú, Sergio},
  booktitle = {Proceedings of Algorithmic Learning Theory},
  year      = {2018},
  pages     = {753--769},
  volume    = {83},
  url       = {https://mlanthology.org/alt/2018/shkel2018alt-sequential/}
}