Optimal Off-Policy Evaluation from Multiple Logging Policies

Abstract

We study off-policy evaluation (OPE) from multiple logging policies, each generating a dataset of fixed size, i.e., stratified sampling. Previous work noted that in this setting the ordering of the variances of different importance sampling estimators is instance-dependent, raising a dilemma as to which importance sampling weights to use. In this paper, we resolve this dilemma by finding the OPE estimator for multiple loggers with minimum variance for any instance, i.e., the efficient one. In particular, we establish the efficiency bound under stratified sampling and propose an estimator achieving this bound when given consistent $q$-estimates. To guard against misspecification of $q$-functions, we also provide a way to choose the control variate within a hypothesis class to minimize variance. Extensive experiments demonstrate the benefits of our methods in efficiently leveraging the stratified sampling of off-policy data from multiple loggers.
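To make the setting concrete, the sketch below shows a generic doubly-robust-style OPE computation under stratified sampling from multiple loggers, with per-stratum importance weights and a $q$-function control variate. The data layout, function names, and the choice of per-stratum (rather than pooled) weights are illustrative assumptions for this sketch, not the paper's efficient estimator.

```python
import numpy as np

def stratified_dr_ope(datasets, actions, pi_e, q_hat):
    """Generic doubly-robust-style OPE sketch under stratified sampling.

    datasets: list of per-logger dicts with keys
        'x'   : contexts, shape (n_k, d)
        'a'   : logged action indices, shape (n_k,)
        'r'   : observed rewards, shape (n_k,)
        'p_b' : logging propensities pi_k(a_i | x_i), shape (n_k,)
    actions : finite set of action indices (assumed here)
    pi_e    : pi_e(a, x) -> evaluation-policy probability of action a
    q_hat   : q_hat(x, a) -> estimated mean reward (the control variate)
    """
    n_total = sum(len(d["r"]) for d in datasets)
    value = 0.0
    for d in datasets:  # one stratum per logging policy
        x, a, r, p_b = d["x"], d["a"], d["r"], d["p_b"]
        # Direct-method baseline: E_{a' ~ pi_e(.|x)}[q_hat(x, a')]
        baseline = np.array([
            sum(pi_e(a_prime, xi) * q_hat(xi, a_prime) for a_prime in actions)
            for xi in x
        ])
        # Per-stratum importance weights pi_e / pi_k (an assumption of this sketch;
        # which weights to use is exactly the dilemma the paper resolves)
        w = np.array([pi_e(ai, xi) for ai, xi in zip(a, x)]) / p_b
        residual = r - np.array([q_hat(xi, ai) for xi, ai in zip(x, a)])
        # Stratum contribution, weighted by its share of the pooled sample
        value += np.sum(baseline + w * residual) / n_total
    return value
```

With a well-specified $q_hat$, the control variate shrinks the variance of the importance-weighted residual term; the paper's contribution is characterizing the lowest variance attainable under stratified sampling and how to reach it, including when $q_hat$ may be misspecified.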

Cite

Text

Kallus et al. "Optimal Off-Policy Evaluation from Multiple Logging Policies." International Conference on Machine Learning, 2021.

Markdown

[Kallus et al. "Optimal Off-Policy Evaluation from Multiple Logging Policies." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/kallus2021icml-optimal/)

BibTeX

@inproceedings{kallus2021icml-optimal,
  title     = {{Optimal Off-Policy Evaluation from Multiple Logging Policies}},
  author    = {Kallus, Nathan and Saito, Yuta and Uehara, Masatoshi},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {5247-5256},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/kallus2021icml-optimal/}
}