Compress and Control

Abstract

This paper describes a new information-theoretic policy evaluation technique for reinforcement learning. This technique converts any compression or density model into a corresponding estimate of value. Under appropriate stationarity and ergodicity conditions, we show that the use of a sufficiently powerful model gives rise to a consistent value function estimator. We also study the behavior of this technique when applied to various Atari 2600 video games, where the use of suboptimal modeling techniques is unavoidable. We consider three fundamentally different models, all too limited to perfectly model the dynamics of the system. Remarkably, we find that our technique provides sufficiently accurate value estimates for effective on-policy control. We conclude with a suggestive study highlighting the potential of our technique to scale to large problems.
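The abstract describes the core mechanism only at a high level: a compression or density model is converted into a value estimate. The sketch below illustrates one way such a density-to-value conversion could look, assuming returns are discretized into bins, with a simple count-based model standing in for the compressor and Bayes' rule recovering the posterior over return bins given a state. All class and function names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: turning density models into a value estimator.
# One count-based density model per discretized return bin estimates
# P(state | return-bin); Bayes' rule yields P(return-bin | state), and the
# value estimate is the expectation of the bin values under that posterior.
from collections import defaultdict


class ReturnBinModel:
    """Toy count-based density model over discrete states (stand-in for a compressor)."""
    def __init__(self):
        self.counts = defaultdict(int)
        self.total = 0

    def update(self, state):
        self.counts[state] += 1
        self.total += 1

    def prob(self, state):
        # Laplace-smoothed probability estimate.
        return (self.counts[state] + 1) / (self.total + 2)


class CompressAndControlSketch:
    def __init__(self, bin_values):
        self.bin_values = bin_values                # representative return of each bin
        self.models = [ReturnBinModel() for _ in bin_values]
        self.bin_counts = [0] * len(bin_values)     # empirical prior P(return-bin)

    def observe(self, state, discounted_return):
        # Assign the observed return to its nearest bin and update that bin's model.
        b = min(range(len(self.bin_values)),
                key=lambda i: abs(self.bin_values[i] - discounted_return))
        self.models[b].update(state)
        self.bin_counts[b] += 1

    def value(self, state):
        # Bayes' rule: P(bin | state) is proportional to P(state | bin) * P(bin);
        # the value estimate is the resulting expectation over bin values.
        total = sum(self.bin_counts) or 1
        joint = [m.prob(state) * (c / total)
                 for m, c in zip(self.models, self.bin_counts)]
        z = sum(joint) or 1.0
        return sum(v * j / z for v, j in zip(self.bin_values, joint))


# Usage: after observing (state, discounted return) pairs generated on-policy,
# value(s) gives the density-based estimate for state s.
cnc = CompressAndControlSketch(bin_values=[0.0, 0.5, 1.0])
cnc.observe("s1", 0.9)
cnc.observe("s1", 1.0)
cnc.observe("s2", 0.1)
print(cnc.value("s1"))   # higher estimate: s1 was seen with returns near 1.0
print(cnc.value("s2"))   # lower estimate: s2 was seen with a return near 0.0
```

In practice the toy count-based model would be replaced by any sequential compression or density model, which is the substitution the paper's consistency result and Atari experiments concern.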

Cite

Text

Veness et al. "Compress and Control." AAAI Conference on Artificial Intelligence, 2015. doi:10.1609/AAAI.V29I1.9600

Markdown

[Veness et al. "Compress and Control." AAAI Conference on Artificial Intelligence, 2015.](https://mlanthology.org/aaai/2015/veness2015aaai-compress/) doi:10.1609/AAAI.V29I1.9600

BibTeX

@inproceedings{veness2015aaai-compress,
  title     = {{Compress and Control}},
  author    = {Veness, Joel and Bellemare, Marc G. and Hutter, Marcus and Chua, Alvin and Desjardins, Guillaume},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2015},
  pages     = {3016--3023},
  doi       = {10.1609/AAAI.V29I1.9600},
  url       = {https://mlanthology.org/aaai/2015/veness2015aaai-compress/}
}