CoinDICE: Off-Policy Confidence Interval Estimation

Abstract

We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning, where the goal is to estimate a confidence interval on a target policy's value, given only access to a static experience dataset collected by unknown behavior policies. Starting from a function space embedding of the linear program formulation of the Q-function, we obtain an optimization problem with generalized estimating equation constraints. By applying the generalized empirical likelihood method to the resulting Lagrangian, we propose CoinDICE, a novel and efficient algorithm for computing confidence intervals. Theoretically, we prove the obtained confidence intervals are valid, in both asymptotic and finite-sample regimes. Empirically, we show in a variety of benchmarks that the confidence interval estimates are tighter and more accurate than existing methods.
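The core statistical tool named in the abstract, the (generalized) empirical likelihood method, can be illustrated on a much simpler problem than off-policy evaluation. The sketch below is not CoinDICE itself (which applies the idea to estimating-equation constraints derived from the Q-function's linear program); it is a standard empirical-likelihood confidence interval for a scalar mean, included only to make the mechanism concrete. All function names are illustrative, and the profile likelihood is solved by plain bisection; `3.8415` is the 95% chi-squared critical value with one degree of freedom.

```python
import math

def el_log_ratio(data, mu):
    """-2 log empirical-likelihood ratio for a candidate mean mu.

    Weights w_i maximize sum(log n*w_i) subject to sum(w_i) = 1 and
    sum(w_i * (x_i - mu)) = 0; the solution is w_i = 1 / (n * (1 + lam*d_i))
    for a Lagrange multiplier lam found by root-finding.
    Returns math.inf when mu is outside the convex hull of the data.
    """
    d = [x - mu for x in data]
    dmin, dmax = min(d), max(d)
    if dmin >= 0 or dmax <= 0:
        return math.inf  # constraint set is empty
    # Feasibility: all weights positive requires 1 + lam*d_i > 0 for every i.
    lo = -1.0 / dmax + 1e-12
    hi = -1.0 / dmin - 1e-12

    def g(lam):  # derivative condition; strictly decreasing in lam
        return sum(di / (1.0 + lam * di) for di in d)

    for _ in range(200):  # bisection for the root of g
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

def el_confidence_interval(data, crit=3.8415):
    """95% empirical-likelihood interval for the mean: all mu with
    el_log_ratio(data, mu) <= crit, found by bisection from the sample mean."""
    xbar = sum(data) / len(data)

    def edge(inside, outside):
        for _ in range(100):
            mid = 0.5 * (inside + outside)
            if el_log_ratio(data, mid) < crit:
                inside = mid
            else:
                outside = mid
        return 0.5 * (inside + outside)

    return edge(xbar, min(data)), edge(xbar, max(data))
```

The interval is the set of candidate values the reweighted data cannot reject, which is the same shape of construction the abstract describes: CoinDICE replaces the single moment condition `E[x - mu] = 0` with the generalized estimating-equation constraints obtained from the embedded linear program, and the chi-squared calibration yields the validity guarantees.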

Cite

Text

Dai et al. "CoinDICE: Off-Policy Confidence Interval Estimation." Neural Information Processing Systems, 2020.

Markdown

[Dai et al. "CoinDICE: Off-Policy Confidence Interval Estimation." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/dai2020neurips-coindice/)

BibTeX

@inproceedings{dai2020neurips-coindice,
  title     = {{CoinDICE: Off-Policy Confidence Interval Estimation}},
  author    = {Dai, Bo and Nachum, Ofir and Chow, Yinlam and Li, Lihong and Szepesvari, Csaba and Schuurmans, Dale},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/dai2020neurips-coindice/}
}