Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation

Abstract

Infinite-horizon off-policy policy evaluation is highly challenging due to the excessively large variance of typical importance sampling (IS) estimators. Recently, Liu et al. (2018) proposed an approach that significantly reduces this variance by estimating the stationary density ratio, but at the cost of potentially large bias from errors in the density ratio estimate. In this paper, we develop a bias-reduced augmentation of their method, which can take advantage of a learned value function to obtain higher accuracy. Our method is doubly robust in that the bias vanishes when either the density ratio or the value function estimate is exact; more generally, the bias is reduced when either of them is accurate. Both theoretical and empirical results show that our method yields significant advantages over previous methods.
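The doubly robust property can be illustrated on a tiny example. The sketch below (a hypothetical 2-state MDP with made-up transitions, policies, and names; not the authors' implementation) evaluates, in exact expectation, an estimator that combines a state density ratio `w` with a value estimate `V`: the bias vanishes when either `w` or `V` is correct, even if the other is deliberately wrong.

```python
import numpy as np

# Illustrative 2-state, 2-action MDP (all quantities are made up).
np.random.seed(0)
gamma = 0.9
nS, nA = 2, 2
P = np.array([[[0.8, 0.2], [0.3, 0.7]],      # P[s, a, s'] transition kernel
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0], [0.5, 2.0]])       # r[s, a] rewards
pi = np.array([[0.7, 0.3], [0.4, 0.6]])      # target policy pi[s, a]
mu = np.array([[0.5, 0.5], [0.6, 0.4]])      # behavior policy mu[s, a]
rho0 = np.array([0.6, 0.4])                  # initial state distribution

def occupancy_and_value(policy):
    """Normalized discounted occupancy d and value function V of a policy."""
    P_pol = np.einsum('sa,sat->st', policy, P)   # induced state transition matrix
    r_pol = (policy * r).sum(axis=1)             # expected reward per state
    V = np.linalg.solve(np.eye(nS) - gamma * P_pol, r_pol)
    d = (1 - gamma) * np.linalg.solve(np.eye(nS) - gamma * P_pol.T, rho0)
    return d, V

d_pi, V_pi = occupancy_and_value(pi)
d_mu, _ = occupancy_and_value(mu)
rho_true = (1 - gamma) * rho0 @ V_pi         # normalized discounted return of pi

def dr_estimate(w, V):
    """Doubly robust estimate in exact expectation, given a state density
    ratio w(s) (ideally d_pi(s)/d_mu(s)) and a value estimate V."""
    est = (1 - gamma) * rho0 @ V
    for s in range(nS):
        for a in range(nA):
            td = r[s, a] + gamma * P[s, a] @ V - V[s]    # TD residual
            est += d_mu[s] * mu[s, a] * w[s] * (pi[s, a] / mu[s, a]) * td
    return est

w_true = d_pi / d_mu                         # exact density ratio
w_bad = np.random.rand(nS) + 0.5             # deliberately wrong ratio
V_bad = np.random.randn(nS)                  # deliberately wrong value

# Either component exact => unbiased (up to floating point);
# with both wrong, the estimate is generally biased.
assert np.isclose(dr_estimate(w_true, V_bad), rho_true)
assert np.isclose(dr_estimate(w_bad, V_pi), rho_true)
```

The first assertion holds because weighting exact occupancy ratios against TD residuals telescopes any value estimate away; the second holds because the TD residual of the true value function has zero mean under the target policy at every state.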

Cite

Text

Tang et al. "Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation." International Conference on Learning Representations, 2020.

Markdown

[Tang et al. "Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/tang2020iclr-doubly/)

BibTeX

@inproceedings{tang2020iclr-doubly,
  title     = {{Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation}},
  author    = {Tang, Ziyang and Feng, Yihao and Li, Lihong and Zhou, Dengyong and Liu, Qiang},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/tang2020iclr-doubly/}
}