Nonparametric Bandits with Covariates

Abstract

We consider a bandit problem that involves sequential sampling from two populations (arms). Each arm yields a noisy reward realization that depends on an observable random covariate. The goal is to maximize cumulative expected reward. We derive general lower bounds on the performance of any admissible policy, and develop an algorithm whose performance matches these lower bounds up to logarithmic terms. This is done by decomposing the global problem into suitably "localized" bandit problems. The proofs blend ideas from nonparametric statistics with traditional methods from the bandit literature.
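The "localization" idea in the abstract can be illustrated with a short sketch: partition the covariate space into bins and run a standard UCB policy independently inside each bin, so that each bin behaves like an ordinary two-armed bandit. This is a minimal illustration, not the paper's algorithm or its tuning; the reward functions, noise level, and number of bins below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_reward(arm, x):
    # Hypothetical smooth mean-reward functions for the two arms.
    return 0.5 + 0.3 * np.sin(3 * x) if arm == 0 else 0.5 + 0.3 * (x - 0.5)

def binned_ucb(T, n_bins=8):
    """Partition the covariate space [0, 1] into n_bins intervals and
    run an independent UCB1 instance inside each bin."""
    counts = np.zeros((n_bins, 2))  # pulls per (bin, arm)
    sums = np.zeros((n_bins, 2))    # reward totals per (bin, arm)
    total_reward = 0.0
    for t in range(1, T + 1):
        x = rng.uniform()                     # observed covariate
        b = min(int(x * n_bins), n_bins - 1)  # index of its bin
        if counts[b].min() == 0:
            arm = int(np.argmin(counts[b]))   # play each arm once per bin
        else:
            means = sums[b] / counts[b]
            bonus = np.sqrt(2 * np.log(t) / counts[b])
            arm = int(np.argmax(means + bonus))  # UCB1 index within the bin
        r = mean_reward(arm, x) + 0.1 * rng.standard_normal()  # noisy reward
        counts[b, arm] += 1
        sums[b, arm] += r
        total_reward += r
    return total_reward

print(binned_ucb(T=10_000))
```

The bin width governs the usual nonparametric trade-off: finer bins reduce the approximation error from treating the mean reward as constant within a bin, but leave fewer samples per bin for the local bandit to learn from.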

Cite

Text

Rigollet and Zeevi. "Nonparametric Bandits with Covariates." Annual Conference on Computational Learning Theory, 2010.

Markdown

[Rigollet and Zeevi. "Nonparametric Bandits with Covariates." Annual Conference on Computational Learning Theory, 2010.](https://mlanthology.org/colt/2010/rigollet2010colt-nonparametric/)

BibTeX

@inproceedings{rigollet2010colt-nonparametric,
  title     = {{Nonparametric Bandits with Covariates}},
  author    = {Rigollet, Philippe and Zeevi, Assaf},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2010},
  pages     = {54--66},
  url       = {https://mlanthology.org/colt/2010/rigollet2010colt-nonparametric/}
}