Balanced Off-Policy Evaluation in General Action Spaces

Abstract

Estimation of importance sampling weights for off-policy evaluation of contextual bandits often results in imbalance, that is, a mismatch between the desired and the actual distribution of state-action pairs after weighting. In this work we present balanced off-policy evaluation (B-OPE), a generic method for estimating weights that minimize this imbalance. Estimation of these weights reduces to a binary classification problem regardless of action type. We show that minimizing the classifier's risk implies minimizing the imbalance with respect to the desired counterfactual distribution. This, in turn, is tied to the error of the off-policy estimate, allowing for easy tuning of hyperparameters. We provide experimental evidence that B-OPE improves weighting-based approaches to offline policy evaluation in both discrete and continuous action spaces.
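The reduction to binary classification described in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration, not the authors' code: it assumes access to logged contexts, actions, and rewards plus a sampler for the target (evaluation) policy, and the names bope_value_estimate and target_policy_sampler are made up for the example. It trains a classifier to distinguish logged context-action pairs from pairs whose actions are drawn from the target policy, then uses the classifier's odds on the logged pairs as importance weights.

import numpy as np
from sklearn.linear_model import LogisticRegression

def bope_value_estimate(contexts, actions, rewards, target_policy_sampler):
    # contexts: (n, d) array of observed contexts
    # actions:  (n, k) array of logged actions (discrete actions one-hot encoded,
    #           or continuous action vectors)
    # rewards:  (n,) array of observed rewards
    # target_policy_sampler: maps contexts to (n, k) actions drawn from the target policy
    n = contexts.shape[0]

    # Label 0: pairs observed under the logging policy.
    # Label 1: the same contexts paired with actions drawn from the target policy.
    target_actions = target_policy_sampler(contexts)
    X_logged = np.hstack([contexts, actions])
    X_target = np.hstack([contexts, target_actions])
    X = np.vstack([X_logged, X_target])
    y = np.concatenate([np.zeros(n), np.ones(n)])

    # Any probabilistic classifier can be used; logistic regression keeps the sketch simple.
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # The classifier's odds on the logged pairs estimate the density ratio
    # pi_target(a | x) / pi_logging(a | x), i.e. the importance weight.
    p = clf.predict_proba(X_logged)[:, 1]
    weights = p / (1.0 - p)

    # Self-normalized weighted average of the observed rewards.
    return np.sum(weights * rewards) / np.sum(weights)

Because the classifier only ever sees (context, action) feature vectors, the same sketch applies whether actions are discrete (encoded as features) or continuous, which is the sense in which the weights are estimated regardless of action type; the self-normalized weighted mean at the end is just one common choice of weighting estimator.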

Cite

Text

Sondhi et al. "Balanced Off-Policy Evaluation in General Action Spaces." Artificial Intelligence and Statistics, 2020.

Markdown

[Sondhi et al. "Balanced Off-Policy Evaluation in General Action Spaces." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/sondhi2020aistats-balanced/)

BibTeX

@inproceedings{sondhi2020aistats-balanced,
  title     = {{Balanced Off-Policy Evaluation in General Action Spaces}},
  author    = {Sondhi, Arjun and Arbour, David and Dimmery, Drew},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2020},
  pages     = {2413--2423},
  volume    = {108},
  url       = {https://mlanthology.org/aistats/2020/sondhi2020aistats-balanced/}
}