Near-Optimal BRL Using Optimistic Local Transitions
Abstract
Model-based Bayesian Reinforcement Learning (BRL) allows a sound formalization of the problem of acting optimally while facing an unknown environment, i.e., it directly addresses the exploration-exploitation dilemma rather than sidestepping it. However, algorithms that explicitly solve the BRL problem suffer from such a combinatorial explosion that a large body of work relies on heuristic algorithms. This paper introduces BOLT, a simple and (almost) deterministic heuristic algorithm for BRL that is optimistic about the transition function. We analyze BOLT's sample complexity and show that, for an appropriate choice of parameters, the algorithm is near-optimal in the Bayesian sense with high probability. Finally, experimental results highlight the key differences between this method and previous work.
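The core idea, being optimistic about the transition function, can be illustrated with a small sketch. The code below is not the authors' implementation; it is a minimal illustration of the general mechanism, assuming a Dirichlet posterior over transitions (counts `alpha`), known rewards `R`, and a hypothetical bonus parameter `eta` that grants artificial observations toward the currently most valuable next state before planning on the resulting mean model.

```python
import numpy as np

def optimistic_value_iteration(alpha, R, gamma=0.95, eta=5.0, n_iters=200):
    """Sketch of an optimistic Bayesian planner in the spirit of BOLT.

    alpha: Dirichlet counts over transitions, shape (S, A, S).
    R:     rewards, shape (S, A).
    eta:   number of artificial optimistic observations (hypothetical
           parameter name; the bonus mass added to the best next state).
    For each (s, a) we imagine `eta` extra observations of the most
    favorable next state, then back up on the resulting mean transition,
    which yields an optimistic Q-value.
    """
    S, A, _ = alpha.shape
    V = np.zeros(S)
    Q = np.zeros((S, A))
    for _ in range(n_iters):
        for s in range(S):
            for a in range(A):
                counts = alpha[s, a]
                total = counts.sum() + eta
                # optimism: grant the eta bonus counts to the next state
                # with the highest current value estimate
                best = np.argmax(V)
                mean = counts / total
                mean[best] += eta / total
                Q[s, a] = R[s, a] + gamma * mean @ V
        V = Q.max(axis=1)
    return Q
```

With `eta = 0` this reduces to planning on the plain posterior-mean model, so the optimistic Q-values are never smaller than the posterior-mean ones; the size of `eta` controls how aggressively the agent explores.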
Cite

Text
Araya-López et al. "Near-Optimal BRL Using Optimistic Local Transitions." International Conference on Machine Learning, 2012.

Markdown
[Araya-López et al. "Near-Optimal BRL Using Optimistic Local Transitions." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/arayalopez2012icml-near/)

BibTeX
@inproceedings{arayalopez2012icml-near,
title = {{Near-Optimal BRL Using Optimistic Local Transitions}},
author = {Araya-López, Mauricio and Buffet, Olivier and Thomas, Vincent},
booktitle = {International Conference on Machine Learning},
year = {2012},
url = {https://mlanthology.org/icml/2012/arayalopez2012icml-near/}
}