Open Problem: Tight Online Confidence Intervals for RKHS Elements
Abstract
Confidence intervals are a crucial building block in the analysis of various online learning problems. The analysis of kernel-based bandit and reinforcement learning problems utilizes confidence intervals applicable to the elements of a reproducing kernel Hilbert space (RKHS). However, the existing confidence bounds do not appear to be tight, resulting in suboptimal regret bounds. In fact, the existing regret bounds for several kernelized bandit algorithms (e.g., GP-UCB, GP-TS, and their variants) may even fail to be sublinear. It is unclear whether the suboptimal regret bound is a fundamental shortcoming of these algorithms or an artifact of the proof, and the main challenge seems to stem from the online (sequential) nature of the observation points. We formalize the question of online confidence intervals in the RKHS setting and survey the existing results.
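For context, a minimal sketch of the interval the open problem asks to tighten, in our own notation (the symbols μ_t, σ_t, γ_t, B, R are not from this page; the form follows the standard kernelized-bandit analysis of Chowdhury and Gopalan, 2017):

```latex
% A sketch, not taken from this page: the known online confidence bound,
% following Chowdhury and Gopalan (2017).
% Kernel ridge regression after t noisy observations y_s = f(x_s) + \epsilon_s:
\[
  \mu_t(x) = k_t(x)^\top (K_t + \lambda I)^{-1} y_{1:t},
  \qquad
  \sigma_t^2(x) = k(x,x) - k_t(x)^\top (K_t + \lambda I)^{-1} k_t(x).
\]
% If \|f\|_{\mathcal{H}} \le B and the noise is R-sub-Gaussian, then with
% probability at least 1 - \delta, simultaneously for all x and all t \ge 1:
\[
  |f(x) - \mu_t(x)|
  \le \Big( B + R\sqrt{2\big(\gamma_t + 1 + \ln(1/\delta)\big)} \Big)\, \sigma_t(x),
\]
% where \gamma_t is the maximal information gain of the kernel after t rounds.
```

Roughly, the question is whether the O(√γ_t) factor in the width, which stems from the sequentially chosen observation points, can be reduced to the O(√(ln(1/δ))) scaling known for fixed (offline) designs; such an improvement would correspondingly tighten the regret bounds of GP-UCB, GP-TS, and their variants.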
Cite
Text

Vakili et al. "Open Problem: Tight Online Confidence Intervals for RKHS Elements." Conference on Learning Theory, 2021.

Markdown

[Vakili et al. "Open Problem: Tight Online Confidence Intervals for RKHS Elements." Conference on Learning Theory, 2021.](https://mlanthology.org/colt/2021/vakili2021colt-open/)

BibTeX
@inproceedings{vakili2021colt-open,
title = {{Open Problem: Tight Online Confidence Intervals for RKHS Elements}},
author = {Vakili, Sattar and Scarlett, Jonathan and Javidi, Tara},
booktitle = {Conference on Learning Theory},
year = {2021},
pages = {4647--4652},
volume = {134},
url = {https://mlanthology.org/colt/2021/vakili2021colt-open/}
}