The Price of Adaptivity in Stochastic Convex Optimization
Abstract
We prove impossibility results for adaptivity in non-smooth stochastic convex optimization. Given a set of problem parameters we wish to adapt to, we define a “price of adaptivity” (PoA) that, roughly speaking, measures the multiplicative increase in suboptimality due to uncertainty in these parameters. When the initial distance to the optimum is unknown but a gradient norm bound is known, we show that the PoA is at least logarithmic for expected suboptimality, and double-logarithmic for median suboptimality. When there is uncertainty in both distance and gradient norm, we show that the PoA must be polynomial in the level of uncertainty. Our lower bounds nearly match existing upper bounds, and establish that there is no parameter-free lunch.
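The abstract describes the PoA only informally. A plausible reading of "multiplicative increase in suboptimality due to uncertainty," sketched here as an assumption rather than the paper's formal definition, is a worst-case ratio over problem instances:

```latex
% Hypothetical sketch of the "price of adaptivity" (not the paper's exact definition):
% compare the best achievable suboptimality of a method that must adapt to unknown
% parameters against one that knows them in advance.
\[
\mathrm{PoA}
  \;=\;
  \sup_{\text{instances}}
  \frac{\text{suboptimality of the adaptive method}}
       {\text{suboptimality achievable with known parameters}}
\]
```

Under this reading, the stated results say the ratio is at least logarithmic (in the parameter uncertainty) for expected suboptimality when only the distance to the optimum is unknown, and polynomial when both distance and gradient norm are uncertain.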
Cite
Text
Carmon and Hinder. "The Price of Adaptivity in Stochastic Convex Optimization." Conference on Learning Theory, 2024.
Markdown
[Carmon and Hinder. "The Price of Adaptivity in Stochastic Convex Optimization." Conference on Learning Theory, 2024.](https://mlanthology.org/colt/2024/carmon2024colt-price/)
BibTeX
@inproceedings{carmon2024colt-price,
title = {{The Price of Adaptivity in Stochastic Convex Optimization}},
author = {Carmon, Yair and Hinder, Oliver},
booktitle = {Conference on Learning Theory},
year = {2024},
pages = {772-774},
volume = {247},
url = {https://mlanthology.org/colt/2024/carmon2024colt-price/}
}