Efficient Counterfactual Learning from Bandit Feedback
Abstract
What is the most statistically efficient way to do off-policy optimization with batch data from bandit feedback? For log data generated by contextual bandit algorithms, we consider offline estimators for the expected reward from a counterfactual policy. Our estimators are shown to have the lowest variance within a wide class of estimators, achieving variance reduction relative to standard estimators. We then apply our estimators to improve advertisement design by a major advertisement company. Consistent with the theoretical result, our estimators allow us to improve on the existing bandit algorithm with more statistical confidence compared to a state-of-the-art benchmark.
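For concreteness, the quantity being estimated is the expected reward (value) of a counterfactual policy π, computed from data logged under a different policy π₀. Below is a minimal Python sketch of the standard inverse-propensity-weighting (IPW) baseline that estimators of this kind build on; the function name and toy numbers are illustrative, not from the paper, and the paper's contribution is a lower-variance estimator within this class, which the baseline below does not implement.

```python
import numpy as np

def ipw_value(rewards, logging_probs, target_probs):
    """Standard inverse-propensity-weighted (IPW) estimate of the expected
    reward of a counterfactual target policy from logged bandit feedback.
    Unbiased when logging_probs are the true propensities pi_0(a_i | x_i).
    """
    # Importance weight for each logged round: pi(a_i | x_i) / pi_0(a_i | x_i)
    weights = target_probs / logging_probs
    return float(np.mean(weights * rewards))

# Hypothetical toy log of 5 rounds (illustrative numbers only):
rewards       = np.array([1.0, 0.0, 1.0, 1.0, 0.0])    # observed rewards r_i
logging_probs = np.array([0.5, 0.25, 0.5, 0.25, 0.5])  # pi_0(a_i | x_i)
target_probs  = np.array([0.8, 0.10, 0.8, 0.10, 0.8])  # pi(a_i | x_i)

print(ipw_value(rewards, logging_probs, target_probs))  # estimated value of pi
```

The estimator reweights each logged reward by how much more (or less) likely the counterfactual policy is to take the logged action than the logging policy was, so its average is an unbiased estimate of the counterfactual value.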
Cite
Text
Narita et al. "Efficient Counterfactual Learning from Bandit Feedback." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33014634

Markdown
[Narita et al. "Efficient Counterfactual Learning from Bandit Feedback." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/narita2019aaai-efficient/) doi:10.1609/AAAI.V33I01.33014634

BibTeX
@inproceedings{narita2019aaai-efficient,
title = {{Efficient Counterfactual Learning from Bandit Feedback}},
author = {Narita, Yusuke and Yasui, Shota and Yata, Kohei},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {4634--4641},
doi = {10.1609/AAAI.V33I01.33014634},
url = {https://mlanthology.org/aaai/2019/narita2019aaai-efficient/}
}