Tracking Most Significant Shifts in Infinite-Armed Bandits

Abstract

We study an infinite-armed bandit problem where actions’ mean rewards are initially sampled from a reservoir distribution. Most prior works in this setting focused on stationary rewards (Berry et al., 1997; Wang et al., 2008; Bonald and Proutiere, 2013; Carpentier and Valko, 2015), with the more challenging adversarial/non-stationary variant only recently studied in the context of rotting/decreasing rewards (Kim et al., 2022; 2024). Furthermore, optimal regret upper bounds were previously achieved only with knowledge of the non-stationarity parameters, and only for certain regularity regimes of the reservoir. This work establishes the first parameter-free optimal regret bounds while also relaxing these distributional assumptions. We also study a natural notion of significant shift for this problem, inspired by recent developments in finite-armed MAB (Suk and Kpotufe, 2022), and show that tighter regret bounds in terms of significant shifts can be adaptively attained. Our enhanced rates depend only on the rotting non-stationarity, exhibiting an interesting phenomenon for this problem: rising non-stationarity does not factor into the difficulty of the problem.
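
To illustrate the reservoir setup described in the abstract, here is a minimal simulation sketch (not the paper's algorithm): each newly queried arm's mean reward is drawn i.i.d. from a reservoir distribution, and a simple explore-then-commit heuristic is run against it. The Uniform(0, 1) reservoir, Bernoulli rewards, and the sqrt(T) exploration budget are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def draw_new_arm():
    # Mean reward of a freshly queried arm, drawn from the reservoir
    # distribution (assumed Uniform(0, 1) here).
    return rng.uniform(0.0, 1.0)

def pull(mean):
    # Noisy reward observation for an arm with the given mean
    # (Bernoulli rewards assumed).
    return float(rng.random() < mean)

T = 10_000                                # horizon
n_explore = int(np.ceil(np.sqrt(T)))      # number of fresh arms to query
pulls_per_arm = 10

# Explore: query fresh arms from the (infinite) reservoir and estimate means.
arm_means = [draw_new_arm() for _ in range(n_explore)]
estimates = [np.mean([pull(m) for _ in range(pulls_per_arm)]) for m in arm_means]

# Commit: spend the remaining budget on the empirically best arm.
best = int(np.argmax(estimates))
remaining = T - n_explore * pulls_per_arm
total_reward = sum(pull(arm_means[best]) for _ in range(remaining))
print(f"best estimated mean {estimates[best]:.2f}, true mean {arm_means[best]:.2f}")

This sketch is stationary; the non-stationary (rotting/rising) variant studied in the paper would additionally let each arm's mean change over time, which is not modeled here.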

Cite

Text

Joe Suk and Jung-Hun Kim. "Tracking Most Significant Shifts in Infinite-Armed Bandits." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Joe Suk and Jung-Hun Kim. "Tracking Most Significant Shifts in Infinite-Armed Bandits." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/suk2025icml-tracking/)

BibTeX

@inproceedings{suk2025icml-tracking,
  title     = {{Tracking Most Significant Shifts in Infinite-Armed Bandits}},
  author    = {Suk, Joe and Kim, Jung-Hun},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {57311--57335},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/suk2025icml-tracking/}
}