Semiparametric Contextual Bandits

Abstract

This paper studies semiparametric contextual bandits, a generalization of the linear stochastic bandit problem where the reward for a chosen action is modeled as a linear function of known action features confounded by a non-linear action-independent term. We design new algorithms that achieve $\tilde{O}(d\sqrt{T})$ regret over $T$ rounds, when the linear function is $d$-dimensional, which matches the best known bounds for the simpler unconfounded case and improves on a recent result of Greenewald et al. (2017). Via an empirical evaluation, we show that our algorithms outperform prior approaches when there are non-linear confounding effects on the rewards. Technically, our algorithms use a new reward estimator inspired by doubly-robust approaches and our proofs require new concentration inequalities for self-normalized martingales.
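To illustrate the setting, here is a minimal simulation sketch (not the paper's algorithm): rewards follow $r_t = \langle x_{a_t}, \theta \rangle + f_t + \varepsilon_t$, where $f_t$ is an arbitrary action-independent confounder. It compares naive least squares against an action-centering estimator in the spirit of the confounder-robust approaches discussed above; all dimensions, policies, and the choice of $f_t$ are illustrative assumptions.

```python
import numpy as np

# Hypothetical simulation of the semiparametric reward model
#   r_t = <x_{a_t}, theta> + f_t + noise,
# where f_t is an action-independent confounder shared by all arms
# in round t. All parameter choices are illustrative, not from the paper.
rng = np.random.default_rng(0)
d, K, T = 5, 10, 5000
theta = rng.normal(size=d)

A_c, b_c = np.eye(d), np.zeros(d)   # centered least-squares statistics
A_n, b_n = np.eye(d), np.zeros(d)   # naive least-squares statistics
for t in range(T):
    feats = rng.normal(size=(K, d)) + 1.0  # nonzero-mean action features
    probs = np.full(K, 1.0 / K)            # uniform exploration policy
    a = rng.choice(K, p=probs)
    f_t = 2.0 + np.sin(0.01 * t)           # non-linear confounder
    r = feats[a] @ theta + f_t + 0.1 * rng.normal()

    # Action centering: subtract the policy-mean feature, so the
    # action-independent term f_t cancels in expectation.
    xc = feats[a] - probs @ feats
    A_c += np.outer(xc, xc)
    b_c += r * xc
    A_n += np.outer(feats[a], feats[a])
    b_n += r * feats[a]

theta_centered = np.linalg.solve(A_c, b_c)
theta_naive = np.linalg.solve(A_n, b_n)
print("centered error:", np.linalg.norm(theta_centered - theta))
print("naive error:   ", np.linalg.norm(theta_naive - theta))
```

Because the confounder is identical across arms in each round, centering the chosen feature at the policy mean removes its contribution in expectation, so the centered estimate converges to $\theta$ while the naive regression absorbs the confounder into a persistent bias.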

Cite

Text

Krishnamurthy et al. "Semiparametric Contextual Bandits." International Conference on Machine Learning, 2018.

Markdown

[Krishnamurthy et al. "Semiparametric Contextual Bandits." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/krishnamurthy2018icml-semiparametric/)

BibTeX

@inproceedings{krishnamurthy2018icml-semiparametric,
  title     = {{Semiparametric Contextual Bandits}},
  author    = {Krishnamurthy, Akshay and Wu, Zhiwei Steven and Syrgkanis, Vasilis},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {2776--2785},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/krishnamurthy2018icml-semiparametric/}
}