Cascading Bandits: Learning to Rank in the Cascade Model

Abstract

A search engine usually outputs a list of K web pages. The user examines this list, from the first web page to the last, and chooses the first attractive web page. This model of user behavior is known as the cascade model. In this paper, we propose cascading bandits, a learning variant of the cascade model where the objective is to identify the K most attractive items. We formulate our problem as a stochastic combinatorial partial monitoring problem. We propose two algorithms for solving it, CascadeUCB1 and CascadeKL-UCB. We also prove gap-dependent upper bounds on the regret of these algorithms and derive a lower bound on the regret in cascading bandits. The lower bound matches the upper bound of CascadeKL-UCB up to a logarithmic factor. We experiment with our algorithms on several problems. The algorithms perform surprisingly well even when our modeling assumptions are violated.
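To make the learning loop concrete, below is a minimal sketch of a UCB1-style agent in the spirit of CascadeUCB1: maintain an upper confidence bound on each item's attraction probability, recommend the K items with the highest bounds, and update the items the user examined. The class name, the toy cascade-model simulator, and the attraction probabilities are illustrative assumptions, not the paper's reference implementation.

```python
import math
import random


class CascadeUCB1Sketch:
    """Sketch of a UCB1-style agent for cascading bandits (not the paper's code).

    Each round: recommend the K items with the highest UCBs on their attraction
    probabilities, observe the position of the first click (if any), and update
    the examined items: items before the click are treated as unattractive,
    the clicked item as attractive, and items after the click stay unobserved.
    """

    def __init__(self, n_items, k):
        self.n_items = n_items
        self.k = k
        self.pulls = [0] * n_items    # number of times each item was observed
        self.means = [0.0] * n_items  # empirical attraction probabilities
        self.t = 0

    def _ucb(self, item):
        if self.pulls[item] == 0:
            return float("inf")  # force initial exploration of every item
        bonus = math.sqrt(1.5 * math.log(self.t) / self.pulls[item])
        return self.means[item] + bonus

    def recommend(self):
        self.t += 1
        ucbs = [self._ucb(e) for e in range(self.n_items)]
        return sorted(range(self.n_items), key=lambda e: -ucbs[e])[:self.k]

    def update(self, ranked_list, click_position):
        """click_position: index of the first click in ranked_list, or None."""
        last = click_position if click_position is not None else len(ranked_list) - 1
        for pos in range(last + 1):
            item = ranked_list[pos]
            reward = 1.0 if pos == click_position else 0.0
            self.pulls[item] += 1
            self.means[item] += (reward - self.means[item]) / self.pulls[item]


def simulate(attraction_probs, k, horizon, seed=0):
    """Toy cascade-model simulator: the user clicks the first attractive item."""
    rng = random.Random(seed)
    agent = CascadeUCB1Sketch(len(attraction_probs), k)
    for _ in range(horizon):
        ranked = agent.recommend()
        click = next((pos for pos, e in enumerate(ranked)
                      if rng.random() < attraction_probs[e]), None)
        agent.update(ranked, click)
    return agent


if __name__ == "__main__":
    agent = simulate([0.3, 0.2, 0.15, 0.1, 0.05, 0.05], k=2, horizon=5000)
    print("estimated attraction probabilities:",
          [round(m, 2) for m in agent.means])
```

The update rule reflects the partial feedback of the cascade model: only items up to and including the click are observed, which is what distinguishes cascading bandits from a standard semi-bandit setting.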

Cite

Text

Kveton et al. "Cascading Bandits: Learning to Rank in the Cascade Model." International Conference on Machine Learning, 2015.

Markdown

[Kveton et al. "Cascading Bandits: Learning to Rank in the Cascade Model." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/kveton2015icml-cascading/)

BibTeX

@inproceedings{kveton2015icml-cascading,
  title     = {{Cascading Bandits: Learning to Rank in the Cascade Model}},
  author    = {Kveton, Branislav and Szepesvari, Csaba and Wen, Zheng and Ashkan, Azin},
  booktitle = {International Conference on Machine Learning},
  year      = {2015},
  pages     = {767-776},
  volume    = {37},
  url       = {https://mlanthology.org/icml/2015/kveton2015icml-cascading/}
}