Off-Policy Confidence Sequences

Abstract

We develop confidence bounds that hold uniformly over time for off-policy evaluation in the contextual bandit setting. These confidence sequences are based on recent ideas from martingale analysis and are non-asymptotic, non-parametric, and valid at arbitrary stopping times. We provide algorithms for computing these confidence sequences that strike a good balance between computational and statistical efficiency. We empirically demonstrate the tightness of our approach in terms of failure probability and width and apply it to the “gated deployment” problem of safely upgrading a production contextual bandit system.
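To make the setting concrete, here is a minimal illustrative sketch of off-policy evaluation with a time-uniform confidence interval. It uses plain inverse-propensity scoring and a crude stitched Hoeffding bound (per-time failure budget α/(t(t+1)), which sums to α over all t), not the martingale-based construction developed in the paper; the function name, the bound, and the `p_min` assumption on behavior-policy propensities are all illustrative choices.

```python
import math

def ips_confidence_sequence(data, p_min, alpha=0.05):
    """Time-uniform confidence sequence for the off-policy value (illustration only).

    data:  iterable of (mu, pi, r) where mu = behavior propensity of the logged
           action, pi = target-policy probability of that action, r = reward in [0, 1].
    p_min: lower bound on behavior propensities, so each IPS term (pi/mu)*r
           lies in [0, 1/p_min].
    Coverage follows from a union bound: Hoeffding at time t with budget
    alpha / (t * (t + 1)), and sum_t 1/(t*(t+1)) = 1.
    """
    b = 1.0 / p_min  # range of each bounded IPS term
    total = 0.0
    intervals = []
    for t, (mu, pi, r) in enumerate(data, start=1):
        total += (pi / mu) * r          # importance-weighted reward
        mean = total / t
        # Hoeffding radius at per-time failure probability alpha / (t*(t+1))
        eps = b * math.sqrt(math.log(2.0 * t * (t + 1) / alpha) / (2.0 * t))
        intervals.append((max(0.0, mean - eps), min(b, mean + eps)))
    return intervals

# Toy usage: uniform behavior policy over 2 actions, target prefers action 0.
import random
rng = random.Random(0)
logs = []
for _ in range(2000):
    a = rng.randrange(2)                # behavior: uniform, mu = 0.5
    pi = 0.9 if a == 0 else 0.1         # target policy probabilities
    r = 1.0 if (a == 0 and rng.random() < 0.7) else 0.0
    logs.append((0.5, pi, r))
cs = ips_confidence_sequence(logs, p_min=0.5)
```

Because the union bound spends its error budget aggressively over time, this sketch is statistically looser than the confidence sequences in the paper; it only illustrates the interface (streaming logs in, anytime-valid intervals out) that a gated-deployment check would consume.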

Cite

Text

Karampatziakis et al. "Off-Policy Confidence Sequences." International Conference on Machine Learning, 2021.

Markdown

[Karampatziakis et al. "Off-Policy Confidence Sequences." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/karampatziakis2021icml-offpolicy/)

BibTeX

@inproceedings{karampatziakis2021icml-offpolicy,
  title     = {{Off-Policy Confidence Sequences}},
  author    = {Karampatziakis, Nikos and Mineiro, Paul and Ramdas, Aaditya},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {5301--5310},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/karampatziakis2021icml-offpolicy/}
}