Empirical Likelihood for Contextual Bandits
Abstract
We propose an estimator and confidence interval for computing the value of a policy from off-policy data in the contextual bandit setting. To this end, we apply empirical likelihood techniques to formulate our estimator and confidence interval as simple convex optimization problems. Using the lower bound of our confidence interval, we then propose an off-policy policy optimization algorithm that searches for policies with a large reward lower bound. We empirically find that both our estimator and confidence interval improve over previous proposals in finite-sample regimes. Finally, the policy optimization algorithm we propose outperforms a strong baseline system for learning from off-policy data.
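The abstract's key computational claim is that both the estimator and the confidence interval reduce to simple convex optimization problems. As a rough illustration of that idea only, the sketch below computes a textbook empirical likelihood lower confidence bound on the mean of importance-weighted rewards z_i = pi(a_i|x_i)/h(a_i|x_i) * r_i via the usual one-dimensional dual. This is a generic EL construction under simulated data, not the paper's exact estimator; `el_log_ratio`, `el_lower_bound`, and the example inputs are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact formulation): a textbook empirical
# likelihood (EL) lower confidence bound on the mean of importance-weighted
# rewards z_i = pi(a_i | x_i) / h(a_i | x_i) * r_i. All names and the
# simulated data below are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2


def el_log_ratio(z, v):
    """-2 log EL ratio for the hypothesis mean(z) = v, via the 1-D dual.

    The optimal weights are w_i = 1 / (n * (1 + t * (z_i - v))), where the
    multiplier t solves sum_i (z_i - v) / (1 + t * (z_i - v)) = 0. Assumes
    v lies strictly inside (min(z), max(z)).
    """
    d = z - v
    if np.allclose(d, 0.0):
        return 0.0
    # Feasible multipliers keep every weight positive: 1 + t * d_i > 0.
    lo, hi = -1.0 / d.max(), -1.0 / d.min()
    span = hi - lo
    g = lambda t: np.sum(d / (1.0 + t * d))  # strictly decreasing in t
    t = brentq(g, lo + 1e-9 * span, hi - 1e-9 * span)
    return 2.0 * np.sum(np.log1p(t * d))


def el_lower_bound(z, alpha=0.05):
    """Smallest v whose EL ratio statistic stays under the chi2(1) quantile."""
    c = chi2.ppf(1.0 - alpha, df=1)
    zbar, zmin = z.mean(), z.min()
    # The statistic is 0 at the sample mean and diverges at the hull edge,
    # so the crossing point can be bisected on (min(z), mean(z)).
    f = lambda v: el_log_ratio(z, v) - c
    left = zmin + 1e-6 * (zbar - zmin)
    return left if f(left) <= 0.0 else brentq(f, left, zbar)


rng = np.random.default_rng(0)
iw = rng.lognormal(sigma=1.0, size=500)   # importance weights pi/h (simulated)
r = rng.binomial(1, 0.3, size=500)        # binary rewards (simulated)
print(el_lower_bound(iw * r))             # lower bound on the policy's value
```

Maximizing such a lower bound over a policy class, rather than the point estimate alone, is the motivation the abstract gives for the proposed policy optimization algorithm.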
Cite
Text
Karampatziakis et al. "Empirical Likelihood for Contextual Bandits." Neural Information Processing Systems, 2020.
Markdown
[Karampatziakis et al. "Empirical Likelihood for Contextual Bandits." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/karampatziakis2020neurips-empirical/)
BibTeX
@inproceedings{karampatziakis2020neurips-empirical,
  title     = {{Empirical Likelihood for Contextual Bandits}},
  author    = {Karampatziakis, Nikos and Langford, John and Mineiro, Paul},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/karampatziakis2020neurips-empirical/}
}