Incorporating Domain Models into Bayesian Optimization for RL

Abstract

In many Reinforcement Learning (RL) domains there is a high cost for generating experience in order to evaluate an agent’s performance. An appealing approach to reducing the number of expensive evaluations is Bayesian Optimization (BO), which is a framework for global optimization of noisy and costly-to-evaluate functions. Prior work in a number of RL domains has demonstrated the effectiveness of BO for optimizing parametric policies. However, those approaches completely ignore the state-transition sequences of policy executions and only consider the total reward achieved. In this paper, we study how to more effectively incorporate all of the information observed during policy executions into the BO framework. In particular, our approach uses the observed data to learn approximate transition models that allow for Monte-Carlo predictions of policy returns. The models are then incorporated into the BO framework as a type of prior on policy returns, which can better inform the BO process. The resulting algorithm provides a new approach for leveraging learned models in RL even when there is no planner available for exploiting those models. We demonstrate the effectiveness of our algorithm in four benchmark domains with dynamics of varying complexity. Results indicate that our algorithm effectively combines model-based predictions to improve the data efficiency of model-free BO methods, and is robust to modeling errors when parts of the domain cannot be modeled successfully.
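The core idea can be sketched in a few lines: use return estimates from an approximate learned model as the prior mean of the Gaussian process that BO places over policy returns, so the GP only has to fit the residual between the model's prediction and the true (expensive, noisy) return. The toy one-dimensional policy space, the specific kernel, the UCB acquisition rule, and the deliberately biased `model_return` below are all illustrative assumptions, not details from the paper; in the paper the prior comes from Monte-Carlo rollouts of learned transition models.

```python
# Illustrative sketch (NOT the paper's implementation): Bayesian
# optimization of a 1-D policy parameter, where a learned-model return
# estimate serves as the GP prior mean and the GP fits only residuals.
import numpy as np

rng = np.random.default_rng(0)

def true_return(theta):
    # Expensive "real" policy evaluation: noisy, optimum at theta = 0.6.
    return -(theta - 0.6) ** 2 + 0.05 * rng.standard_normal()

def model_return(theta):
    # Cheap return estimate from an approximate learned model.
    # Deliberately biased (optimum at 0.5, not 0.6) to mimic model error;
    # in the paper this would be a Monte-Carlo rollout average.
    return -(theta - 0.5) ** 2

def rbf(a, b, length=0.2):
    # Squared-exponential kernel between two 1-D parameter arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    # GP regression on residuals y - model_return(X); the model-based
    # prior mean is added back when predicting at test points Xs.
    m_X = model_return(X)
    m_Xs = model_return(Xs)
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = m_Xs + Ks @ np.linalg.solve(K, y - m_X)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 1e-12)

# BO loop: evaluate the policy parameter with the highest UCB score.
X = np.array([0.0, 1.0])
y = np.array([true_return(x) for x in X])
grid = np.linspace(0.0, 1.0, 101)
for _ in range(8):
    mu, var = gp_posterior(X, y, grid)
    theta_next = grid[np.argmax(mu + 2.0 * np.sqrt(var))]
    X = np.append(X, theta_next)
    y = np.append(y, true_return(theta_next))

best = X[np.argmax(y)]
```

Because the biased prior mean already captures the coarse shape of the return surface, the optimizer concentrates its few expensive evaluations near the model's predicted optimum and then corrects for the bias from the residual GP, which is the data-efficiency and robustness argument the abstract makes.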

Cite

Text

Wilson et al. "Incorporating Domain Models into Bayesian Optimization for RL." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2010. doi:10.1007/978-3-642-15939-8_30

Markdown

[Wilson et al. "Incorporating Domain Models into Bayesian Optimization for RL." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2010.](https://mlanthology.org/ecmlpkdd/2010/wilson2010ecmlpkdd-incorporating/) doi:10.1007/978-3-642-15939-8_30

BibTeX

@inproceedings{wilson2010ecmlpkdd-incorporating,
  title     = {{Incorporating Domain Models into Bayesian Optimization for RL}},
  author    = {Wilson, Aaron and Fern, Alan and Tadepalli, Prasad},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2010},
  pages     = {467--482},
  doi       = {10.1007/978-3-642-15939-8_30},
  url       = {https://mlanthology.org/ecmlpkdd/2010/wilson2010ecmlpkdd-incorporating/}
}