Off-Policy Evaluation with Out-of-Sample Guarantees
Abstract
We consider the problem of evaluating the performance of a decision policy using past observational data. The outcome of a policy is measured in terms of a loss (also known as disutility or negative reward), and the main problem is to make valid inferences about its out-of-sample loss when the past data was observed under a different and possibly unknown policy. Using a sample-splitting method, we show that it is possible to draw such inferences with finite-sample coverage guarantees about the entire loss distribution, rather than just its mean. Importantly, the method takes into account model misspecifications of the past policy, including unmeasured confounding. The evaluation method can be used to certify the performance of a policy using observational data under a specified range of credible model assumptions.
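As a rough illustration of the kind of procedure the abstract alludes to, the sketch below uses weighted split-conformal calibration on logged data to obtain a high-probability bound on the out-of-sample loss of a target policy. This is not the authors' exact construction: the synthetic data, the logging-policy propensities, and the inflation factor `gamma` standing in for a sensitivity analysis of the behavior-policy model (e.g., unmeasured confounding) are all illustrative assumptions.

```python
import numpy as np

def weighted_loss_quantile(losses, weights, alpha):
    """Smallest loss value q such that the normalized weight of calibration
    points with loss <= q reaches 1 - alpha, reserving the largest observed
    weight for the unseen test point (a conservative choice)."""
    order = np.argsort(losses)
    losses, weights = losses[order], weights[order]
    total = weights.sum() + weights.max()
    cum = np.cumsum(weights) / total
    idx = np.searchsorted(cum, 1.0 - alpha)
    if idx == len(losses):
        return np.inf  # requested coverage not attainable from this calibration set
    return losses[idx]

rng = np.random.default_rng(0)

# Synthetic logged data: contexts x, binary actions a from a logging policy, observed losses.
n = 2000
x = rng.normal(size=n)
p_logging = 1.0 / (1.0 + np.exp(-x))            # P(a = 1 | x) under the logging policy
a = rng.binomial(1, p_logging)
loss = np.abs(x) * (a == 1) + 0.5 * (a == 0) + 0.1 * rng.normal(size=n)

# Target policy to evaluate: deterministic a = 1{x > 0}.
a_target = (x > 0).astype(int)

# Sample splitting: one half would be used to fit nuisance models (e.g. the logging
# policy); here the logging policy is known, so only a calibration half is kept.
calib = np.arange(n) >= n // 2
keep = calib & (a == a_target)                  # calibration points consistent with the target action

# Importance weights pi_target / pi_logging (pi_target = 1 on kept points). The factor
# gamma >= 1 crudely inflates the weights to mimic allowing the logging-policy model to
# be misspecified (e.g. unmeasured confounding); a proper sensitivity analysis is more refined.
gamma = 1.5
p_taken = np.where(a == 1, p_logging, 1.0 - p_logging)
w = gamma / p_taken[keep]

q90 = weighted_loss_quantile(loss[keep], w, alpha=0.10)
print(f"90% out-of-sample loss bound under the target policy: {q90:.3f}")
```

In this sketch, varying `gamma` traces out how the certified loss bound degrades as the assumed misspecification of the logging policy grows, which mirrors the abstract's notion of certifying performance under a specified range of credible model assumptions.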
Cite
Text
Ek et al. "Off-Policy Evaluation with Out-of-Sample Guarantees." Transactions on Machine Learning Research, 2023.
Markdown
[Ek et al. "Off-Policy Evaluation with Out-of-Sample Guarantees." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/ek2023tmlr-offpolicy/)
BibTeX
@article{ek2023tmlr-offpolicy,
title = {{Off-Policy Evaluation with Out-of-Sample Guarantees}},
author = {Ek, Sofia and Zachariah, Dave and Johansson, Fredrik D. and Stoica, Peter},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/ek2023tmlr-offpolicy/}
}