Fixed-Budget Change Point Identification in Piecewise Constant Bandits

Abstract

We study the piecewise constant bandit problem, where the expected reward is a piecewise constant function with one change point (discontinuity) across the action space $[0,1]$ and the learner’s aim is to locate the change point. Under the assumption of a fixed exploration budget, we provide the first non-asymptotic analysis of policies designed to locate abrupt changes in the mean reward function under bandit feedback. We study the problem in both large and small budget regimes, and for both settings establish lower bounds on the error probability and provide algorithms with near-matching upper bounds. Interestingly, our results show a separation in the complexity of the two regimes. We then propose a regime-adaptive algorithm which is near optimal for both small and large budgets simultaneously. We complement our theoretical analysis with experimental results in simulated environments to support our findings.
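To make the problem setup concrete, the sketch below simulates a piecewise constant bandit on $[0,1]$ with one change point and a naive fixed-budget baseline that samples a uniform grid of actions and returns the midpoint of the largest adjacent gap in the estimated means. This is an illustrative toy baseline, not the paper's algorithm; the function names, grid size, and noise model are assumptions made for the example.

```python
import random

def make_bandit(change_point, mu_left, mu_right, noise_sd=1.0):
    """Piecewise constant mean reward on [0, 1] with a single change
    point: pulling action x returns mu_left (if x < change_point) or
    mu_right (otherwise) plus Gaussian noise. Illustrative toy model."""
    def pull(x):
        mean = mu_left if x < change_point else mu_right
        return mean + random.gauss(0.0, noise_sd)
    return pull

def locate_change_point(pull, budget, grid_size=20):
    """Naive baseline (not the paper's method): spend the budget
    uniformly over a grid of actions, estimate each mean, and return
    the midpoint of the pair of adjacent grid points whose estimated
    means differ the most."""
    reps = max(1, budget // grid_size)          # pulls per grid point
    xs = [(i + 0.5) / grid_size for i in range(grid_size)]
    means = [sum(pull(x) for _ in range(reps)) / reps for x in xs]
    gaps = [abs(means[i + 1] - means[i]) for i in range(grid_size - 1)]
    i = max(range(len(gaps)), key=gaps.__getitem__)
    return (xs[i] + xs[i + 1]) / 2

random.seed(0)
pull = make_bandit(change_point=0.6, mu_left=0.0, mu_right=1.0, noise_sd=0.5)
estimate = locate_change_point(pull, budget=2000)  # estimate near 0.6
```

With a budget of 2000 pulls spread over 20 grid points, each mean is estimated from 100 noisy samples, so the gap straddling the true change point dominates with high probability; smarter policies (e.g. budget-aware adaptive sampling, as studied in the paper) concentrate pulls near the suspected discontinuity instead of spending uniformly.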

Cite

Text

Lazzaro and Pike-Burke. "Fixed-Budget Change Point Identification in Piecewise Constant Bandits." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.

Markdown

[Lazzaro and Pike-Burke. "Fixed-Budget Change Point Identification in Piecewise Constant Bandits." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/lazzaro2025aistats-fixedbudget/)

BibTeX

@inproceedings{lazzaro2025aistats-fixedbudget,
  title     = {{Fixed-Budget Change Point Identification in Piecewise Constant Bandits}},
  author    = {Lazzaro, Joseph and Pike-Burke, Ciara},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  year      = {2025},
  pages     = {3268--3276},
  volume    = {258},
  url       = {https://mlanthology.org/aistats/2025/lazzaro2025aistats-fixedbudget/}
}