Sum-Max Submodular Bandits
Abstract
Many online decision-making problems correspond to maximizing a sequence of submodular functions. In this work, we introduce sum-max functions, a subclass of monotone submodular functions capturing several interesting problems, including best-of-$K$-bandits, combinatorial bandits, and the bandit versions of $M$-medians and hitting sets. We show that all functions in this class satisfy a key property that we call pseudo-concavity. This allows us to prove $\big(1 - \frac{1}{e}\big)$-regret bounds for bandit feedback in the nonstochastic setting of the order of $\sqrt{MKT}$ (ignoring log factors), where $T$ is the time horizon and $M$ is a cardinality constraint. This bound, attained by a simple and efficient algorithm, significantly improves on the $\widetilde{\mathcal{O}}\big(T^{2/3}\big)$ regret bound for online monotone submodular maximization with bandit feedback. We also extend our results to a bandit version of the facility location problem.
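As the name suggests, a sum-max function scores a set by summing, over a collection of targets, the best weight achieved by any selected element; facility-location-style objectives have this shape. The sketch below is a hypothetical illustration (the weight matrix `w` and the exact form $f(S) = \sum_v \max_{a \in S} w_{v,a}$ are assumptions for demonstration, not taken from the paper) showing that such a function is monotone and has diminishing returns on a small instance.

```python
# Hypothetical sum-max set function: f(S) = sum over targets v of max_{a in S} w[v][a].
# The weights w and this exact functional form are illustrative assumptions.
def sum_max(S, w):
    """Value of selecting the arm set S under weight matrix w; the empty set scores 0."""
    if not S:
        return 0.0
    return sum(max(row[a] for a in S) for row in w)

# Tiny instance: 3 targets (rows), 3 candidate arms (columns).
w = [
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.3],
    [0.1, 0.2, 0.7],
]

# Monotonicity: adding an arm never decreases the value.
assert sum_max({0}, w) <= sum_max({0, 1}, w) <= sum_max({0, 1, 2}, w)

# Diminishing returns (submodularity) at this instance:
# the gain from adding arm 2 to {0} is at least its gain when added to {0, 1}.
gain_small = sum_max({0, 2}, w) - sum_max({0}, w)
gain_large = sum_max({0, 1, 2}, w) - sum_max({0, 1}, w)
assert gain_small >= gain_large
```

Under a cardinality constraint $|S| \le M$, maximizing such a function exactly is hard in general, which is why the paper targets $(1 - \frac{1}{e})$-regret rather than exact regret.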
Cite
Text
Pasteris et al. "Sum-Max Submodular Bandits." ICML 2024 Workshops: RLControlTheory, 2024.
Markdown
[Pasteris et al. "Sum-Max Submodular Bandits." ICML 2024 Workshops: RLControlTheory, 2024.](https://mlanthology.org/icmlw/2024/pasteris2024icmlw-summax/)
BibTeX
@inproceedings{pasteris2024icmlw-summax,
title = {{Sum-Max Submodular Bandits}},
author = {Pasteris, Stephen and Rumi, Alberto and Vitale, Fabio and Cesa-Bianchi, Nicolò},
booktitle = {ICML 2024 Workshops: RLControlTheory},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/pasteris2024icmlw-summax/}
}