Cover Tree Bayesian Reinforcement Learning
Abstract
This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high-dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration policies in unknown environments. The flexibility and computational simplicity of the model render it suitable for many reinforcement learning problems in continuous state spaces. We demonstrate this in an experimental comparison with a Gaussian process model, a linear model, and simple least-squares policy iteration.
Cite
Text
Tziortziotis et al. "Cover Tree Bayesian Reinforcement Learning." Journal of Machine Learning Research, 2014.
Markdown
[Tziortziotis et al. "Cover Tree Bayesian Reinforcement Learning." Journal of Machine Learning Research, 2014.](https://mlanthology.org/jmlr/2014/tziortziotis2014jmlr-cover/)
BibTeX
@article{tziortziotis2014jmlr-cover,
  title   = {{Cover Tree Bayesian Reinforcement Learning}},
  author  = {Tziortziotis, Nikolaos and Dimitrakakis, Christos and Blekas, Konstantinos},
  journal = {Journal of Machine Learning Research},
  year    = {2014},
  pages   = {2313--2335},
  volume  = {15},
  url     = {https://mlanthology.org/jmlr/2014/tziortziotis2014jmlr-cover/}
}