UniRank: Unimodal Bandit Algorithms for Online Ranking
Abstract
We tackle, in the multiple-play bandit setting, the online ranking problem of assigning L items to K predefined positions on a web page in order to maximize the number of user clicks. We propose a generic algorithm, UniRank, that handles state-of-the-art click models. The regret bound of this algorithm is a direct consequence of the pseudo-unimodality property of the bandit setting with respect to a graph whose nodes are ordered sets of indistinguishable items. The main contribution of UniRank is its $O(L/\Delta \log T)$ regret for T consecutive assignments, where $\Delta$ relates to the reward gap between two items. This regret bound relies on the usually implicit assumption that no two items have the same attractiveness. Experiments against state-of-the-art learning algorithms, specialized or not for different click models, show that our method has better regret performance than other generic algorithms on real-life and synthetic datasets.
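To give intuition for the graph-based unimodal-bandit idea behind the paper, here is a minimal toy sketch (not the paper's UniRank algorithm): arms are rankings of K out of L items, neighbouring rankings differ by one adjacent swap or by substituting the last shown item, and a local search repeatedly plays the current leader's neighbourhood and moves to the empirically best neighbour. The click model, attractiveness and examination values are illustrative assumptions.

```python
import random

random.seed(0)

# Toy setting (hypothetical values): L items, K display positions.
L, K = 4, 3
attract = [0.9, 0.7, 0.5, 0.3]   # assumed item attractiveness
exam = [1.0, 0.6, 0.3]           # assumed position examination probabilities


def draw_clicks(ranking):
    """Simulate one user under a position-based click model."""
    return [random.random() < attract[i] * exam[p]
            for p, i in enumerate(ranking)]


def neighbours(ranking):
    """Rankings one 'move' away: adjacent swaps, or swapping in an unshown item."""
    out = []
    for p in range(K - 1):
        r = list(ranking)
        r[p], r[p + 1] = r[p + 1], r[p]
        out.append(tuple(r))
    shown = set(ranking)
    for j in range(L):
        if j not in shown:
            r = list(ranking)
            r[-1] = j
            out.append(tuple(r))
    return out


counts, sums = {}, {}  # empirical statistics per ranking


def play(ranking):
    """Display the ranking once and record the number of clicks."""
    c = sum(draw_clicks(ranking))
    counts[ranking] = counts.get(ranking, 0) + 1
    sums[ranking] = sums.get(ranking, 0.0) + c


def mean(ranking):
    return sums.get(ranking, 0.0) / max(counts.get(ranking, 0), 1)


# Graph-based local search: start from the worst ranking, sample the
# leader and its neighbours, then move to the empirically best candidate.
leader = (3, 2, 1)
for _ in range(300):
    candidates = [leader] + neighbours(leader)
    for r in candidates:
        play(r)
    leader = max(candidates, key=mean)

print(leader)
```

Under (pseudo-)unimodality, following ascending edges of the neighbourhood graph reaches the optimal ranking, which is what makes the logarithmic regret of this family of algorithms possible; this sketch omits the confidence bounds a real bandit algorithm would use.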
Cite
Text

Gauthier et al. "UniRank: Unimodal Bandit Algorithms for Online Ranking." International Conference on Machine Learning, 2022.

Markdown

[Gauthier et al. "UniRank: Unimodal Bandit Algorithms for Online Ranking." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/gauthier2022icml-unirank/)

BibTeX
@inproceedings{gauthier2022icml-unirank,
title = {{UniRank: Unimodal Bandit Algorithms for Online Ranking}},
author = {Gauthier, Camille-Sovanneary and Gaudel, Romaric and Fromont, Elisa},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {7279--7309},
volume = {162},
url = {https://mlanthology.org/icml/2022/gauthier2022icml-unirank/}
}