On the Necessity of Adaptive Regularisation: Optimal Anytime Online Learning on $\boldsymbol{\ell_p}$-Balls
Abstract
We study online convex optimization on $\ell_p$-balls in $\mathbb{R}^d$ for $p > 2$. While always sub-linear, the optimal regret exhibits a shift between the high-dimensional setting, where the dimension $d$ exceeds the time horizon $T$ ($d > T$), and the low-dimensional setting ($d \leq T$). We show that Follow-the-Regularised-Leader (FTRL) with time-varying regularisation that adapts to the dimension regime is anytime optimal in both regimes. Motivated by this, we ask whether anytime optimality can be achieved by FTRL with a fixed, non-adaptive regulariser. Our main result establishes that, for separable regularisers, adaptivity in the regulariser is necessary: any fixed regulariser is sub-optimal in one of the two dimension regimes. Finally, we provide lower bounds ruling out sub-linear regret for the linear bandit problem in sufficiently high dimension for all $\ell_p$-balls with $p \geq 1$.
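To make the setting concrete, the following is a minimal sketch of generic FTRL with linear losses on the unit $\ell_p$-ball, using a time-varying squared-$\ell_2$ regulariser. The schedule `eta_schedule`, the squared-$\ell_2$ choice, and the SLSQP solver are illustrative assumptions for this sketch only; they are not the paper's dimension-adaptive regulariser or analysis.

```python
# Hedged sketch: anytime FTRL with a time-varying squared-l2 regulariser on the
# unit l_p ball, for linear losses. The regulariser and schedule are illustrative
# assumptions, not the construction from the paper.
import numpy as np
from scipy.optimize import minimize

def ftrl_lp_ball(gradients, p, eta_schedule):
    """Play x_t = argmin_{||x||_p <= 1} <L_{t-1}, x> + (eta_t / 2) ||x||_2^2,
    where L_{t-1} is the sum of the loss gradients observed so far."""
    T, d = gradients.shape
    L = np.zeros(d)                      # cumulative gradient
    x0 = np.zeros(d)                     # warm start for the solver
    iterates = []
    for t in range(1, T + 1):
        eta = eta_schedule(t, d)
        obj = lambda x, L=L.copy(), eta=eta: L @ x + 0.5 * eta * x @ x
        # ||x||_p <= 1  <=>  1 - sum_i |x_i|^p >= 0
        cons = {"type": "ineq", "fun": lambda x: 1.0 - np.sum(np.abs(x) ** p)}
        x_t = minimize(obj, x0, constraints=[cons], method="SLSQP").x
        iterates.append(x_t)
        L += gradients[t - 1]            # gradient revealed after playing x_t
        x0 = x_t
    return np.array(iterates)

if __name__ == "__main__":
    # Illustrative run with a sqrt(t)-type schedule (an assumption, not tuned).
    rng = np.random.default_rng(0)
    T, d, p = 50, 10, 3
    grads = rng.normal(size=(T, d))
    xs = ftrl_lp_ball(grads, p, eta_schedule=lambda t, d: np.sqrt(t))
    print(xs[-1])
```

A dimension-adaptive variant, in the spirit of the abstract, would let `eta_schedule` depend on whether $d > t$ or $d \leq t$; the exact choice that achieves the optimal rates is the subject of the paper.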
Cite
Text
Johnson et al. "On the Necessity of Adaptive Regularisation: Optimal Anytime Online Learning on $\boldsymbol{\ell_p}$-Balls." Advances in Neural Information Processing Systems, 2025.
Markdown
[Johnson et al. "On the Necessity of Adaptive Regularisation: Optimal Anytime Online Learning on $\boldsymbol{\ell_p}$-Balls." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/johnson2025neurips-necessity/)
BibTeX
@inproceedings{johnson2025neurips-necessity,
title = {{On the Necessity of Adaptive Regularisation: Optimal Anytime Online Learning on $\boldsymbol{\ell_p}$-Balls}},
author = {Johnson, Emmeran and Martínez-Rubio, David and Pike-Burke, Ciara and Rebeschini, Patrick},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/johnson2025neurips-necessity/}
}