Bayesian Sparse Sampling for On-Line Reward Optimization
Abstract
We present an efficient "sparse sampling" technique for approximating Bayes-optimal decision making in reinforcement learning, addressing the well-known exploration-versus-exploitation tradeoff. Our approach combines sparse sampling with Bayesian exploration to achieve improved decision making while controlling computational cost. The idea is to grow a sparse lookahead tree intelligently, exploiting information in a Bayesian posterior rather than enumerating action branches (as in standard sparse sampling) or compensating myopically (as in value of perfect information). The outcome is a flexible, practical technique for improving action selection in simple reinforcement learning scenarios.
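To make the idea concrete, here is a minimal, hypothetical sketch (in Python, not the authors' implementation) of posterior-guided sparse lookahead for a Bernoulli bandit with independent Beta priors: instead of enumerating every action at every node, each node expands only a few branches chosen by sampling from the current posterior, simulates outcomes from the posterior predictive, and backs values up to the root. The function names (`choose_action`, `lookahead_value`, `thompson_action`) and the parameters `DEPTH` and `SAMPLES` are invented here for illustration.

```python
# Illustrative sketch only: posterior-guided sparse lookahead for a Bernoulli
# bandit with Beta(1, 1) priors per arm. Not the paper's algorithm or code.
import random

DEPTH = 3      # lookahead horizon
SAMPLES = 4    # posterior samples used to pick which branches to grow
N_ARMS = 3

def thompson_action(posterior):
    """Pick a branch to expand by sampling a mean-reward estimate for each arm
    from its Beta posterior and taking the argmax, rather than enumerating
    every action at this node."""
    draws = [random.betavariate(a, b) for (a, b) in posterior]
    return max(range(len(draws)), key=lambda i: draws[i])

def lookahead_value(posterior, depth):
    """Estimate the value of acting from this belief state by growing a sparse
    tree: a few posterior-sampled action branches, outcomes simulated from the
    posterior predictive, and recursion to a bounded depth."""
    if depth == 0:
        return 0.0
    best = 0.0
    for _ in range(SAMPLES):
        arm = thompson_action(posterior)
        a, b = posterior[arm]
        p = a / (a + b)  # posterior predictive P(reward = 1) for this arm
        succ = list(posterior); succ[arm] = (a + 1, b)  # belief if pull succeeds
        fail = list(posterior); fail[arm] = (a, b + 1)  # belief if pull fails
        value = (p * (1.0 + lookahead_value(succ, depth - 1))
                 + (1.0 - p) * lookahead_value(fail, depth - 1))
        best = max(best, value)
    return best

def choose_action(posterior):
    """At the root, score each arm by expected immediate reward plus the
    backed-up value of the sparse lookahead tree rooted at the updated belief."""
    scores = []
    for arm, (a, b) in enumerate(posterior):
        p = a / (a + b)
        succ = list(posterior); succ[arm] = (a + 1, b)
        fail = list(posterior); fail[arm] = (a, b + 1)
        scores.append(p * (1.0 + lookahead_value(succ, DEPTH - 1))
                      + (1.0 - p) * lookahead_value(fail, DEPTH - 1))
    return max(range(N_ARMS), key=lambda i: scores[i])

if __name__ == "__main__":
    posterior = [(1.0, 1.0)] * N_ARMS   # uniform Beta(1, 1) priors
    true_means = [0.2, 0.5, 0.8]        # hidden arm success probabilities
    for t in range(20):
        arm = choose_action(posterior)
        reward = 1 if random.random() < true_means[arm] else 0
        a, b = posterior[arm]
        posterior[arm] = (a + reward, b + 1 - reward)
    print("posterior after 20 pulls:", posterior)
```

In this sketch the posterior plays both roles described in the abstract: it selects which branches of the lookahead tree are worth growing and it supplies the predictive distribution used to simulate rewards along those branches.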
Cite
Text
Wang et al. "Bayesian Sparse Sampling for On-Line Reward Optimization." International Conference on Machine Learning, 2005. doi:10.1145/1102351.1102472
Markdown
[Wang et al. "Bayesian Sparse Sampling for On-Line Reward Optimization." International Conference on Machine Learning, 2005.](https://mlanthology.org/icml/2005/wang2005icml-bayesian/) doi:10.1145/1102351.1102472
BibTeX
@inproceedings{wang2005icml-bayesian,
  title = {{Bayesian Sparse Sampling for On-Line Reward Optimization}},
  author = {Wang, Tao and Lizotte, Daniel J. and Bowling, Michael H. and Schuurmans, Dale},
  booktitle = {International Conference on Machine Learning},
  year = {2005},
  pages = {956--963},
  doi = {10.1145/1102351.1102472},
  url = {https://mlanthology.org/icml/2005/wang2005icml-bayesian/}
}