Directional Optimism for Safe Linear Bandits

Abstract

The safe linear bandit problem is a version of the classical stochastic linear bandit problem where the learner's actions must satisfy an uncertain constraint at all rounds. Due to its applicability to many real-world settings, this problem has received considerable attention in recent years. By leveraging a novel approach that we call directional optimism, we find that it is possible to achieve improved regret guarantees for both well-separated problem instances and action sets that are finite star convex sets. Furthermore, we propose a novel algorithm for this setting that improves on existing algorithms in terms of empirical performance, while enjoying matching regret guarantees. Lastly, we introduce a generalization of the safe linear bandit setting where the constraints are convex, and we adapt our algorithms and analyses to this setting by leveraging a novel convex-analysis-based approach.

Cite

Text

Hutchinson et al. "Directional Optimism for Safe Linear Bandits." Artificial Intelligence and Statistics, 2024.

Markdown

[Hutchinson et al. "Directional Optimism for Safe Linear Bandits." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/hutchinson2024aistats-directional/)

BibTeX

@inproceedings{hutchinson2024aistats-directional,
  title     = {{Directional Optimism for Safe Linear Bandits}},
  author    = {Hutchinson, Spencer and Turan, Berkay and Alizadeh, Mahnoosh},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2024},
  pages     = {658--666},
  volume    = {238},
  url       = {https://mlanthology.org/aistats/2024/hutchinson2024aistats-directional/}
}