Pessimism for Offline Linear Contextual Bandits Using $\ell_p$ Confidence Sets

Abstract

We present a family $\{\widehat{\pi}_p\}_{p\ge 1}$ of pessimistic learning rules for offline learning in linear contextual bandits, based on confidence sets with respect to different $\ell_p$ norms: $\widehat{\pi}_2$ corresponds to Bellman-consistent pessimism (BCP), while $\widehat{\pi}_\infty$ is a novel generalization of the lower confidence bound (LCB) to the linear setting. We show that the novel $\widehat{\pi}_\infty$ learning rule is, in a sense, adaptively optimal: it achieves the minimax performance (up to log factors) against all $\ell_q$-constrained problems, and as such it strictly dominates all other predictors in the family, including $\widehat{\pi}_2$.
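
As a reading aid (not taken verbatim from the paper), one plausible way to instantiate an $\ell_p$ pessimistic rule of this kind is sketched below; the notation $\widehat{\theta}$ (a least-squares estimate of the reward parameter), $\Sigma$ (the empirical feature covariance), $\phi$ (the feature map), and the confidence radius $\beta_p$ are assumptions here, and the paper's exact confidence-set construction may differ:

$$\mathcal{C}_p \;=\; \bigl\{\theta : \|\Sigma^{1/2}(\theta - \widehat{\theta})\|_p \le \beta_p\bigr\}, \qquad \widehat{\pi}_p \;\in\; \arg\max_{\pi}\,\min_{\theta \in \mathcal{C}_p} \mathbb{E}_{x}\bigl[\langle \phi(x,\pi(x)),\, \theta\rangle\bigr].$$

In words, each candidate policy is scored by its worst-case value over the $\ell_p$ confidence set (pessimism), and the rule returns the policy with the best pessimistic score; for $p=2$ the set is an ellipsoid (BCP-style), while for $p=\infty$ it is a box in the whitened coordinates, yielding the LCB-style rule.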

Cite

Text

Li et al. "Pessimism for Offline Linear Contextual Bandits Using $\ell_p$ Confidence Sets." Neural Information Processing Systems, 2022.

Markdown

[Li et al. "Pessimism for Offline Linear Contextual Bandits Using $\ell_p$ Confidence Sets." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/li2022neurips-pessimism/)

BibTeX

@inproceedings{li2022neurips-pessimism,
  title     = {{Pessimism for Offline Linear Contextual Bandits Using $\ell_p$ Confidence Sets}},
  author    = {Li, Gene and Ma, Cong and Srebro, Nati},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/li2022neurips-pessimism/}
}