Piecewise-Stationary Dueling Bandits
Abstract
We study the piecewise-stationary dueling bandits problem with $K$ arms, where the time horizon $T$ consists of $M$ stationary segments, each of which is associated with its own preference matrix. The learner repeatedly selects a pair of arms and observes a binary preference between them as feedback. To minimize the accumulated regret, the learner needs to pick the Condorcet winner of each stationary segment as often as possible, despite preference matrices and segment lengths being unknown. We propose the Beat the Winner Reset algorithm and prove a bound on its expected binary weak regret in the stationary case, which tightens the bound of current state-of-the-art algorithms. We also show a regret bound for the non-stationary case, without requiring knowledge of $M$ or $T$. We further propose and analyze two meta-algorithms, DETECT for weak regret and Monitored Dueling Bandits for strong regret, both based on a detection-window approach that can incorporate any dueling bandit algorithm as a black box. Finally, we prove a worst-case lower bound for expected weak regret in the non-stationary case.
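The interaction protocol described above can be sketched in a few lines. The following is a minimal simulation of the piecewise-stationary dueling bandit environment, not the paper's algorithms: `RandomPairLearner` is a hypothetical placeholder learner, and `run_piecewise` merely implements the feedback loop in which each round yields a single binary duel outcome drawn from the current segment's preference matrix.

```python
import random

class RandomPairLearner:
    # Hypothetical baseline learner: picks a uniformly random pair each round.
    # A real algorithm (e.g. one that tracks empirical win rates) would go here.
    def __init__(self, n_arms, rng):
        self.n_arms, self.rng = n_arms, rng

    def select_pair(self):
        return self.rng.sample(range(self.n_arms), 2)

    def observe(self, i, j, feedback):
        pass  # update internal statistics in a real implementation

def run_piecewise(segments, learner, rng):
    """Simulate the piecewise-stationary dueling bandit protocol.

    segments: list of (length, P) pairs, where P[i][j] is the probability
    that arm i beats arm j during that stationary segment.  The learner is
    never told the segment boundaries, their lengths, or the matrices.
    """
    history = []
    for length, P in segments:
        for _ in range(length):
            i, j = learner.select_pair()
            # Binary preference feedback: 1 iff arm i wins the duel.
            feedback = 1 if rng.random() < P[i][j] else 0
            learner.observe(i, j, feedback)
            history.append((i, j, feedback))
    return history
```

With two segments whose Condorcet winners differ (arm 0 first, then arm 1), this loop produces exactly one duel outcome per round, which is all the feedback a learner receives in this setting.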
Cite
Kolpaczki et al. "Piecewise-Stationary Dueling Bandits." Transactions on Machine Learning Research, 2024.
@article{kolpaczki2024tmlr-piecewisestationary,
title = {{Piecewise-Stationary Dueling Bandits}},
author = {Kolpaczki, Patrick and Hüllermeier, Eyke and Bengs, Viktor},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/kolpaczki2024tmlr-piecewisestationary/}
}