Counterfactual Learning with General Data-Generating Policies

Abstract

Off-policy evaluation (OPE) attempts to predict the performance of counterfactual policies using log data from a different policy. We extend its applicability by developing an OPE method for a class of both full-support and deficient-support logging policies in contextual-bandit settings. This class includes deterministic bandit algorithms (such as Upper Confidence Bound) as well as deterministic decision-making based on supervised and unsupervised learning. We prove that our method's prediction converges in probability to the true performance of a counterfactual policy as the sample size increases. We validate our method with experiments on partly and entirely deterministic logging policies. Finally, we apply it to evaluate coupon targeting policies by a major online platform and show how to improve the existing policy.
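As background for the setting the abstract describes, here is a minimal sketch of the standard inverse-propensity-weighting (IPW) OPE estimator on toy synthetic data (all names and numbers are illustrative, not from the paper). IPW reweights logged rewards by the ratio of target-policy to logging-policy action probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logged data: 2 actions, logging policy picks uniformly (propensity 0.5),
# so the logging policy has full support over both actions.
n = 100_000
log_propensities = np.full(n, 0.5)
actions = rng.integers(0, 2, size=n)
# Reward model (illustrative): action 1 pays 1.0 on average, action 0 pays 0.2.
rewards = np.where(actions == 1, 1.0, 0.2) + rng.normal(scale=0.1, size=n)

def ipw_value(target_prob_of_logged_action, propensity, reward):
    """IPW estimate of the target policy's value from logged bandit feedback."""
    weights = target_prob_of_logged_action / propensity
    return float(np.mean(weights * reward))

# Counterfactual target policy: deterministically play action 1.
target_prob = (actions == 1).astype(float)  # pi(a_i | x_i) for each logged a_i
v_hat = ipw_value(target_prob, log_propensities, rewards)
print(round(v_hat, 2))
```

The estimate converges to the target policy's true value (here about 1.0) as the sample size grows. Note that IPW requires the logging propensity to be positive for every action the target policy would take; when the logging policy is itself deterministic (deficient support), these propensities are zero or undefined, which is precisely the gap the paper's method addresses.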

Cite

Text

Narita et al. "Counterfactual Learning with General Data-Generating Policies." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I8.26113

Markdown

[Narita et al. "Counterfactual Learning with General Data-Generating Policies." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/narita2023aaai-counterfactual/) doi:10.1609/AAAI.V37I8.26113

BibTeX

@inproceedings{narita2023aaai-counterfactual,
  title     = {{Counterfactual Learning with General Data-Generating Policies}},
  author    = {Narita, Yusuke and Okumura, Kyohei and Shimizu, Akihiro and Yata, Kohei},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {9286--9293},
  doi       = {10.1609/AAAI.V37I8.26113},
  url       = {https://mlanthology.org/aaai/2023/narita2023aaai-counterfactual/}
}