Minimax Value Interval for Off-Policy Evaluation and Policy Optimization

Abstract

We study minimax methods for off-policy evaluation (OPE) using value functions and marginalized importance weights. Although these methods hold the promise of overcoming the exponential variance of traditional importance sampling, several key problems remain: (1) They rely on function approximation and are generally biased. For the sake of trustworthy OPE, is there any way to quantify the biases? (2) They are split into two styles ("weight-learning" vs. "value-learning"). Can they be unified? In this paper we answer both questions positively. By slightly altering the derivations of previous methods (one from each style), we unify them into a single value interval that enjoys a special type of double robustness: when either the value-function or the importance-weight class is well specified, the interval is valid and its length quantifies the misspecification of the other class. Our interval also provides a unified view of, and new insights into, some recent methods, and we further explore the implications of our results for exploration and exploitation in off-policy policy optimization with insufficient data coverage.
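
For intuition, the following is a minimal sketch of the Lagrangian-style construction behind such a value interval; the notation (L, d_0, d_mu, Q, W) is assumed here rather than quoted from this page, and the paper's exact statement may differ in details.

% Sketch only: notation assumed, not quoted from the abstract above.
% q: candidate value function in class \mathcal{Q};  w: candidate importance weight in class \mathcal{W};
% d_0: initial-state distribution;  d_\mu: normalized discounted data distribution;
% q(s, \pi) := \mathbb{E}_{a \sim \pi(\cdot|s)}[q(s, a)];  J(\pi): normalized return of the target policy \pi.
\[
L(q, w) \;=\; (1-\gamma)\,\mathbb{E}_{s_0 \sim d_0}\!\big[q(s_0, \pi)\big]
\;+\; \mathbb{E}_{(s,a,r,s') \sim d_\mu}\!\big[w(s,a)\,\big(r + \gamma\, q(s', \pi) - q(s,a)\big)\big].
\]
% Key identities: L(q^\pi, w) = J(\pi) for every w, and L(q, w^\pi) = J(\pi) for every q.
% Hence, if q^\pi \in \mathcal{Q}, then for every fixed w,
\[
\min_{q \in \mathcal{Q}} L(q, w) \;\le\; J(\pi) \;\le\; \max_{q \in \mathcal{Q}} L(q, w),
\]
% and tightening over w \in \mathcal{W} yields an interval; a symmetric argument applies when w^\pi \in \mathcal{W}.

In this sketch, if both classes happen to be well specified, the two identities force the endpoints to coincide with J(pi), so the interval collapses to a point estimate; the interval's length thus reflects how far the other class is from realizability.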

Cite

Text

Jiang and Huang. "Minimax Value Interval for Off-Policy Evaluation and Policy Optimization." Neural Information Processing Systems, 2020.

Markdown

[Jiang and Huang. "Minimax Value Interval for Off-Policy Evaluation and Policy Optimization." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/jiang2020neurips-minimax/)

BibTeX

@inproceedings{jiang2020neurips-minimax,
  title     = {{Minimax Value Interval for Off-Policy Evaluation and Policy Optimization}},
  author    = {Jiang, Nan and Huang, Jiawei},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/jiang2020neurips-minimax/}
}