Learning to Plan Variable Length Sequences of Actions with a Cascading Bandit Click Model of User Feedback

Abstract

Motivated by problems of ranking with partial information, we introduce a variant of the cascading bandit model that considers flexible-length sequences with varying rewards and losses. We formulate two generative models for this problem within the generalized linear setting, and design and analyze upper confidence algorithms for it. Our analysis delivers tight regret bounds which, when specialized to standard cascading bandits, result in sharper guarantees than previously available in the literature. We evaluate our algorithms against a representative sample of cascading bandit baselines on a number of real-world datasets and show significantly improved empirical performance.

Cite

Text

Santara et al. "Learning to Plan Variable Length Sequences of Actions with a Cascading Bandit Click Model of User Feedback." Artificial Intelligence and Statistics, 2022.

Markdown

[Santara et al. "Learning to Plan Variable Length Sequences of Actions with a Cascading Bandit Click Model of User Feedback." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/santara2022aistats-learning/)

BibTeX

@inproceedings{santara2022aistats-learning,
  title     = {{Learning to Plan Variable Length Sequences of Actions with a Cascading Bandit Click Model of User Feedback}},
  author    = {Santara, Anirban and Aggarwal, Gaurav and Li, Shuai and Gentile, Claudio},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {767--797},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/santara2022aistats-learning/}
}