Reducing Dueling Bandits to Cardinal Bandits
Abstract
We present algorithms for reducing the Dueling Bandits problem to the conventional (stochastic) Multi-Armed Bandits problem. The Dueling Bandits problem is an online model of learning with ordinal feedback of the form “A is preferred to B” (as opposed to cardinal feedback like “A has value 2.5”), giving it wide applicability in learning from implicit user feedback and revealed and stated preferences. In contrast to existing algorithms for the Dueling Bandits problem, our reductions – named Doubler, MultiSbm and DoubleSbm – provide a generic schema for translating the extensive body of known results about conventional Multi-Armed Bandit algorithms to the Dueling Bandits setting. For Doubler and MultiSbm we prove regret upper bounds in both finite and infinite settings, and we conjecture about the performance of DoubleSbm, which outperforms the other two as well as previous algorithms in our experiments. In addition, we provide the first almost optimal regret bound in terms of second order terms, such as the differences between the values of the arms.
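To give a concrete sense of the reduction schema the abstract describes, here is a minimal Python sketch of a Doubler-style algorithm that treats a standard cardinal bandit (UCB1 here) as a black box and feeds it the binary outcomes of duels. The `duel` callback, the `doubler_sketch` function, and the pool-handling details are illustrative assumptions for this sketch, not the paper's exact procedure; the paper specifies the precise epoch schedule and the accompanying regret guarantees.

```python
import math
import random


class UCB1:
    """Standard UCB1 over n_arms arms with rewards in [0, 1]."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.sums = [0.0] * n_arms
        self.t = 0

    def select(self):
        self.t += 1
        for arm, count in enumerate(self.counts):
            if count == 0:                   # play every arm once first
                return arm
        return max(
            range(len(self.counts)),
            key=lambda a: self.sums[a] / self.counts[a]
            + math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward


def doubler_sketch(duel, n_arms, horizon):
    """Illustrative Doubler-style reduction (simplified sketch).

    Runs in epochs of doubling length. The left arm of each duel is drawn
    uniformly from the multiset of arms the black-box bandit played in the
    previous epoch; the right arm is chosen by the black-box bandit, which
    receives the binary duel outcome as its cardinal reward.
    """
    left_pool = list(range(n_arms))          # epoch 0: all arms eligible
    t, epoch_len = 0, 1
    while t < horizon:
        sbm = UCB1(n_arms)                   # fresh black-box bandit per epoch
        next_pool = []
        for _ in range(min(epoch_len, horizon - t)):
            x = random.choice(left_pool)     # left arm from previous epoch
            y = sbm.select()                 # right arm from the bandit
            win = duel(y, x)                 # 1 if y is preferred to x, else 0
            sbm.update(y, win)
            next_pool.append(y)
            t += 1
        left_pool, epoch_len = next_pool, 2 * epoch_len
```

As a usage example, `duel` could be any stochastic comparison oracle, e.g. `lambda a, b: int(random.random() < p[a][b])` for a preference matrix `p`; only ordinal feedback ever reaches the learner, which is the point of the reduction.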
Cite

Text
Ailon et al. "Reducing Dueling Bandits to Cardinal Bandits." International Conference on Machine Learning, 2014.

Markdown
[Ailon et al. "Reducing Dueling Bandits to Cardinal Bandits." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/ailon2014icml-reducing/)

BibTeX
@inproceedings{ailon2014icml-reducing,
title = {{Reducing Dueling Bandits to Cardinal Bandits}},
author = {Ailon, Nir and Karnin, Zohar and Joachims, Thorsten},
booktitle = {International Conference on Machine Learning},
year = {2014},
  pages = {856--864},
volume = {32},
url = {https://mlanthology.org/icml/2014/ailon2014icml-reducing/}
}