Reinforcement Learning via AIXI Approximation
Abstract
This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. This approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To develop our approximation, we introduce a Monte Carlo Tree Search algorithm along with an agent-specific extension of the Context Tree Weighting algorithm. Empirically, we present a set of encouraging results on a number of stochastic, unknown, and partially observable domains.
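To give a concrete feel for one of the two components named above, here is a minimal, illustrative sketch (not the authors' code) of standard binary Context Tree Weighting with the Krichevsky-Trofimov estimator. The paper's agent-specific extension mixes over action-conditional models of agent histories; this sketch only shows the plain sequence-prediction version, with an assumed context depth and toy input.

```python
import math

class CTWNode:
    """One node of the context tree: KT counts plus log-probabilities."""
    def __init__(self):
        self.counts = [0, 0]          # number of 0s and 1s seen at this context
        self.log_kt = 0.0             # log KT estimate of this node's data
        self.log_weighted = 0.0       # log CTW-weighted probability
        self.children = [None, None]  # child per context bit

class CTW:
    """Plain binary CTW over depth-D contexts (illustrative sketch only)."""
    def __init__(self, depth):
        self.depth = depth
        self.root = CTWNode()
        self.history = []

    def _update(self, node, bit, context, level):
        # Sequential KT update: P(bit) = (count(bit) + 1/2) / (total + 1).
        node.log_kt += math.log((node.counts[bit] + 0.5) / (sum(node.counts) + 1.0))
        node.counts[bit] += 1
        if level == self.depth:
            node.log_weighted = node.log_kt      # leaf: no further mixing
            return
        c = context[level]
        if node.children[c] is None:
            node.children[c] = CTWNode()
        self._update(node.children[c], bit, context, level + 1)
        # Internal node: 1/2 * KT(node) + 1/2 * product of children's weighted probs,
        # computed in log space (missing children contribute probability 1).
        log_children = sum(ch.log_weighted for ch in node.children if ch is not None)
        m = max(node.log_kt, log_children)
        node.log_weighted = m + math.log(0.5 * math.exp(node.log_kt - m) +
                                         0.5 * math.exp(log_children - m))

    def update(self, bit):
        """Incorporate `bit`; return the predictive probability the model gave it."""
        context = (self.history[::-1] + [0] * self.depth)[:self.depth]
        before = self.root.log_weighted
        self._update(self.root, bit, context, 0)
        self.history.append(bit)
        return math.exp(self.root.log_weighted - before)

if __name__ == "__main__":
    model, p = CTW(depth=4), None
    for b in [1, 1, 0] * 40:          # deterministic period-3 pattern
        p = model.update(b)
    print(f"probability assigned to the last bit: {p:.3f}")  # approaches 1
```

On this toy periodic input the predictive probability of the next bit quickly approaches one, since depth-4 contexts identify the position in the pattern; the full agent of the paper combines such a learned model with Monte Carlo Tree Search for planning.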
Cite
Text
Veness et al. "Reinforcement Learning via AIXI Approximation." AAAI Conference on Artificial Intelligence, 2010. doi:10.1609/AAAI.V24I1.7667
Markdown
[Veness et al. "Reinforcement Learning via AIXI Approximation." AAAI Conference on Artificial Intelligence, 2010.](https://mlanthology.org/aaai/2010/veness2010aaai-reinforcement/) doi:10.1609/AAAI.V24I1.7667
BibTeX
@inproceedings{veness2010aaai-reinforcement,
title = {{Reinforcement Learning via AIXI Approximation}},
author = {Veness, Joel and Ng, Kee Siong and Hutter, Marcus and Silver, David},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2010},
pages = {605--611},
doi = {10.1609/AAAI.V24I1.7667},
url = {https://mlanthology.org/aaai/2010/veness2010aaai-reinforcement/}
}