High Confidence Policy Improvement

Abstract

We present a batch reinforcement learning (RL) algorithm that provides probabilistic guarantees about the quality of each policy that it proposes, and which has no hyper-parameter that requires expert tuning. Specifically, the user may select any performance lower bound and confidence level, and our algorithm will ensure that the probability that it returns a policy with performance below the lower bound is at most the specified confidence level. We then propose an incremental algorithm that executes our policy improvement algorithm repeatedly to generate multiple policy improvements. We show the viability of our approach on a simple 4×4 gridworld and the standard mountain car problem, as well as on a digital marketing application that uses real-world data.
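To make the guarantee concrete: the algorithm evaluates a candidate policy on held-out batch data and deploys it only if a high-confidence lower bound on its performance clears the user's threshold. Below is a minimal sketch of such a safety test, assuming per-trajectory importance-sampled return estimates and a one-sided Student's t lower confidence bound (one of the concentration approaches considered in this line of work); the function names (`importance_sampled_returns`, `safety_test`) and the policy interface are illustrative, not the paper's actual code.

```python
# Sketch of a high-confidence safety test for batch policy improvement.
# Assumes trajectories were collected with a known behavior policy pi_b,
# and that pi_c(a, s) / pi_b(a, s) return action probabilities.
import numpy as np
from scipy import stats

def importance_sampled_returns(trajectories, pi_c, pi_b, gamma=1.0):
    """Per-trajectory importance-weighted returns of candidate policy pi_c,
    estimated from trajectories generated by behavior policy pi_b.
    Each trajectory is a list of (state, action, reward) tuples."""
    estimates = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_c(a, s) / pi_b(a, s)  # cumulative likelihood ratio
            ret += (gamma ** t) * r
        estimates.append(weight * ret)
    return np.asarray(estimates)

def safety_test(estimates, rho_min, delta):
    """Approve the candidate policy only if a one-sided (1 - delta) lower
    confidence bound on its expected performance is at least rho_min."""
    n = len(estimates)
    mean = estimates.mean()
    sem = estimates.std(ddof=1) / np.sqrt(n)
    lower_bound = mean - sem * stats.t.ppf(1.0 - delta, df=n - 1)
    return lower_bound >= rho_min
```

In use, the improvement loop returns the candidate policy when `safety_test` passes and otherwise reports that no solution was found, rather than risk deploying a worse policy. Note that importance-weighted returns can be heavy-tailed, so the t-based bound relies on an approximate normality assumption; tighter, assumption-free concentration inequalities trade away some statistical efficiency for an exact guarantee.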

Cite

Text

Thomas et al. "High Confidence Policy Improvement." International Conference on Machine Learning, 2015.

Markdown

[Thomas et al. "High Confidence Policy Improvement." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/thomas2015icml-high/)

BibTeX

@inproceedings{thomas2015icml-high,
  title     = {{High Confidence Policy Improvement}},
  author    = {Thomas, Philip and Theocharous, Georgios and Ghavamzadeh, Mohammad},
  booktitle = {International Conference on Machine Learning},
  year      = {2015},
  pages     = {2380--2388},
  volume    = {37},
  url       = {https://mlanthology.org/icml/2015/thomas2015icml-high/}
}