Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem

Abstract

We study the $K$-armed dueling bandit problem, a variation of the standard stochastic bandit problem in which feedback is limited to relative comparisons between pairs of arms. We derive a tight asymptotic regret lower bound based on the information divergence. We then propose an algorithm inspired by the Deterministic Minimum Empirical Divergence algorithm (Honda and Takemura, 2010) and analyze its regret. The proposed algorithm is the first whose regret upper bound matches the lower bound. Experimental comparisons of dueling bandit algorithms show that it significantly outperforms existing methods.
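To make the setting concrete, here is a minimal sketch of the dueling bandit feedback model: each round, the learner picks a pair of arms and observes only a noisy comparison. The preference matrix `P` and the Condorcet-winner regret definition below are illustrative assumptions (a standard formulation in this literature), not code from the paper.

```python
import random

# Hypothetical 3-armed preference matrix: P[i][j] = Pr(arm i beats arm j).
# Arm 0 is a Condorcet winner: it beats every other arm with probability > 1/2.
P = [
    [0.5, 0.6, 0.7],
    [0.4, 0.5, 0.6],
    [0.3, 0.4, 0.5],
]

def duel(i, j, rng=random):
    """Compare arms i and j once; the only feedback is the winner's index."""
    return i if rng.random() < P[i][j] else j

def regret_increment(i, j, winner=0):
    """Condorcet regret of one comparison of (i, j): the average gap of the
    two chosen arms to the Condorcet winner (a common regret definition)."""
    gap_i = P[winner][i] - 0.5
    gap_j = P[winner][j] - 0.5
    return (gap_i + gap_j) / 2.0

# Comparing the Condorcet winner with itself incurs zero regret,
# so regret only accumulates while suboptimal arms are being explored.
assert regret_increment(0, 0) == 0.0
```

An algorithm in this setting must trade off exploring pairs to estimate `P` against repeatedly playing the winner against itself; the paper's lower bound characterizes the minimum exploration any consistent algorithm must perform, in terms of information divergences between comparison distributions.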

Cite

Text

Komiyama et al. "Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem." Annual Conference on Computational Learning Theory, 2015.

Markdown

[Komiyama et al. "Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem." Annual Conference on Computational Learning Theory, 2015.](https://mlanthology.org/colt/2015/komiyama2015colt-regret/)

BibTeX

@inproceedings{komiyama2015colt-regret,
  title     = {{Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem}},
  author    = {Komiyama, Junpei and Honda, Junya and Kashima, Hisashi and Nakagawa, Hiroshi},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2015},
  pages     = {1141--1154},
  url       = {https://mlanthology.org/colt/2015/komiyama2015colt-regret/}
}