The K-Armed Dueling Bandits Problem

Abstract

We study a partial-information online-learning problem where actions are restricted to noisy comparisons between pairs of strategies (also known as bandits). In contrast to conventional approaches that require the absolute reward of the chosen strategy to be quantifiable and observable, our setting assumes only that (noisy) binary feedback about the relative reward of two chosen strategies is available. This type of relative feedback is particularly appropriate in applications where absolute rewards have no natural scale or are difficult to measure (e.g., user-perceived quality of a set of retrieval results, taste of food, product attractiveness), but where pairwise comparisons are easy to make. We propose a novel regret formulation in this setting and present an algorithm that achieves information-theoretically optimal regret bounds (up to a constant factor).
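The feedback model the abstract describes can be sketched as a simulation: the learner never sees absolute rewards, only a noisy binary outcome of a duel between two arms. This is a minimal illustrative sketch, not code from the paper; the arm names and preference probabilities below are invented for the example.

```python
import random

# Assumed example preference matrix: P[(i, j)] is the probability that
# arm i beats arm j in a single noisy comparison. These numbers are
# purely illustrative, not taken from the paper.
P = {
    ("a", "b"): 0.70,
    ("a", "c"): 0.60,
    ("b", "c"): 0.55,
}

def duel(i, j, rng):
    """Return True if arm i wins a noisy comparison against arm j.

    The learner observes only this binary outcome, never an
    absolute reward for either arm.
    """
    if (i, j) in P:
        p = P[(i, j)]
    else:
        # Comparisons are symmetric: Pr(j beats i) = 1 - Pr(i beats j).
        p = 1.0 - P[(j, i)]
    return rng.random() < p

# Repeated duels let the learner estimate relative preferences
# empirically, even though no absolute reward is ever observed.
rng = random.Random(0)
wins = sum(duel("a", "b", rng) for _ in range(10_000))
estimate = wins / 10_000
print(estimate)
```

Running many duels concentrates the empirical win rate around the underlying preference probability, which is the statistical signal a dueling-bandits algorithm exploits in place of observed rewards.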

Cite

Text

Yue et al. "The K-Armed Dueling Bandits Problem." Annual Conference on Computational Learning Theory, 2009. doi:10.1016/j.jcss.2011.12.028

Markdown

[Yue et al. "The K-Armed Dueling Bandits Problem." Annual Conference on Computational Learning Theory, 2009.](https://mlanthology.org/colt/2009/yue2009colt-k/) doi:10.1016/j.jcss.2011.12.028

BibTeX

@inproceedings{yue2009colt-k,
  title     = {{The K-Armed Dueling Bandits Problem}},
  author    = {Yue, Yisong and Broder, Josef and Kleinberg, Robert and Joachims, Thorsten},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2009},
  doi       = {10.1016/j.jcss.2011.12.028},
  url       = {https://mlanthology.org/colt/2009/yue2009colt-k/}
}