Minimax Bounds on Stochastic Batched Convex Optimization
Abstract
We study the stochastic batched convex optimization problem, in which we use many \emph{parallel} observations to optimize a convex function given limited rounds of interaction. In each of $M$ rounds, an algorithm may query for information at $n$ points, and only after issuing all $n$ queries does it receive unbiased noisy function and/or (sub)gradient evaluations at those $n$ points. After $M$ such rounds, the algorithm must output an estimator. We provide lower and upper bounds on the performance of such batched convex optimization algorithms in zeroth- and first-order settings for Lipschitz convex and smooth strongly convex functions. Our rates of convergence (nearly) achieve the fully sequential rate once $M = O(d \log \log n)$, where $d$ is the problem dimension, but the rates may degrade exponentially as the dimension $d$ increases, in contrast to fully sequential settings.
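To make the interaction model concrete, the following is a minimal sketch of the batched first-order protocol described in the abstract; it is not the authors' algorithm. The assumptions are a simple quadratic objective $f(x) = \tfrac{1}{2}\|x - x_\star\|^2$ (smooth and strongly convex), Gaussian gradient noise, and a naive scheme that issues all $n$ queries of a round at the current iterate and performs one averaged gradient step per round.

```python
import numpy as np

def noisy_grad_oracle(x, x_star, sigma, rng):
    """Unbiased noisy gradient of f(x) = 0.5 * ||x - x_star||^2."""
    return (x - x_star) + sigma * rng.standard_normal(x.shape)

def batched_gradient_method(x0, x_star, M=8, n=1024, sigma=1.0, step=1.0, seed=0):
    """Illustrative batched protocol: M rounds, n parallel queries per round.

    All n query points of a round are fixed before any feedback from that
    round arrives; the noisy gradients for the whole batch are revealed only
    once the round ends, and the iterate is updated once per round.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(M):
        # Choose all n query points for this round (here: the current iterate).
        queries = [x.copy() for _ in range(n)]
        # Feedback for the whole batch arrives only after the round closes.
        grads = [noisy_grad_oracle(q, x_star, sigma, rng) for q in queries]
        x = x - step * np.mean(grads, axis=0)  # one update per round
    return x

if __name__ == "__main__":
    d = 5
    x_star = np.ones(d)
    x_hat = batched_gradient_method(np.zeros(d), x_star, M=8, n=1024)
    print("optimality gap:", 0.5 * np.sum((x_hat - x_star) ** 2))
```

This toy scheme only averages out noise within each round; the paper's point is to characterize how much better (or worse) any batched algorithm can do relative to the fully sequential setting as a function of $M$, $n$, and $d$.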
Cite
Text
Duchi et al. "Minimax Bounds on Stochastic Batched Convex Optimization." Annual Conference on Computational Learning Theory, 2018.
Markdown
[Duchi et al. "Minimax Bounds on Stochastic Batched Convex Optimization." Annual Conference on Computational Learning Theory, 2018.](https://mlanthology.org/colt/2018/duchi2018colt-minimax/)
BibTeX
@inproceedings{duchi2018colt-minimax,
title = {{Minimax Bounds on Stochastic Batched Convex Optimization}},
author = {Duchi, John C. and Ruan, Feng and Yun, Chulhee},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2018},
pages = {3065-3162},
url = {https://mlanthology.org/colt/2018/duchi2018colt-minimax/}
}