Data-Efficient Policy Evaluation Through Behavior Policy Search

Abstract

We consider the task of evaluating a policy for a Markov decision process (MDP). The standard unbiased technique for evaluating a policy is to deploy the policy and observe its performance. We show that the data collected from deploying a different policy, commonly called the behavior policy, can be used to produce unbiased estimates with lower mean squared error than this standard technique. We derive an analytic expression for a minimal variance behavior policy -- a behavior policy that minimizes the mean squared error of the resulting estimates. Because this expression depends on terms that are unknown in practice, we propose a novel policy evaluation sub-problem, behavior policy search: searching for a behavior policy that reduces mean squared error. We present two behavior policy search algorithms and empirically demonstrate their effectiveness in lowering the mean squared error of policy performance estimates.
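The core idea — collecting data with a behavior policy and importance-weighting it to get an unbiased estimate of the evaluation policy's performance — can be illustrated with a minimal sketch. The toy setting below (a two-armed bandit with made-up rewards and policy probabilities, not an example from the paper) shows ordinary importance sampling and, for this one-step case, the closed-form minimal-variance behavior policy `pi_b*(a) ∝ pi_e(a) r(a)`, under which every weighted sample equals the true value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step MDP (a two-armed bandit) with deterministic rewards.
# All numbers here are illustrative assumptions, not from the paper.
rewards = np.array([1.0, 2.0])   # reward for each action
pi_e = np.array([0.2, 0.8])      # evaluation policy whose value we want
pi_b = np.array([0.5, 0.5])      # uniform behavior policy used to collect data

true_value = float(pi_e @ rewards)  # exact expected reward under pi_e

# Ordinary importance sampling: act with pi_b, reweight each observed
# reward by pi_e(a) / pi_b(a). The sample mean is unbiased for pi_e's
# expected reward.
n = 100_000
actions = rng.choice(2, size=n, p=pi_b)
weights = pi_e[actions] / pi_b[actions]
is_estimate = float(np.mean(weights * rewards[actions]))

# Minimal-variance behavior policy for this bandit: pi_b*(a) proportional
# to pi_e(a) * r(a). Each weighted sample then equals true_value exactly,
# so the estimator has zero variance.
pi_b_star = pi_e * rewards
pi_b_star /= pi_b_star.sum()
actions2 = rng.choice(2, size=n, p=pi_b_star)
star_estimate = float(
    np.mean(pi_e[actions2] / pi_b_star[actions2] * rewards[actions2])
)

print(true_value, is_estimate, star_estimate)
```

In sequential MDPs the quantities defining the optimal behavior policy are unknown, which is what motivates the paper's behavior policy search algorithms.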

Cite

Text

Hanna et al. "Data-Efficient Policy Evaluation Through Behavior Policy Search." Journal of Machine Learning Research, 2024.

Markdown

[Hanna et al. "Data-Efficient Policy Evaluation Through Behavior Policy Search." Journal of Machine Learning Research, 2024.](https://mlanthology.org/jmlr/2024/hanna2024jmlr-dataefficient/)

BibTeX

@article{hanna2024jmlr-dataefficient,
  title     = {{Data-Efficient Policy Evaluation Through Behavior Policy Search}},
  author    = {Hanna, Josiah P. and Chandak, Yash and Thomas, Philip S. and White, Martha and Stone, Peter and Niekum, Scott},
  journal   = {Journal of Machine Learning Research},
  year      = {2024},
  pages     = {1--58},
  volume    = {25},
  url       = {https://mlanthology.org/jmlr/2024/hanna2024jmlr-dataefficient/}
}