Improved Lower Bounds for First-Order Stochastic Non-Convex Optimization Under Markov Sampling
Abstract
Unlike its vanilla counterpart with i.i.d. samples, stochastic optimization with Markovian sampling allows the sampling scheme to follow a Markov chain. This problem encompasses applications ranging from asynchronous distributed optimization to reinforcement learning. In this work, we lower bound the sample complexity of finding $\epsilon$-approximate critical solutions for any first-order method when sampling is Markovian. We show that for samples drawn from stationary Markov processes with a countable state space, any algorithm that accesses smooth, non-convex functions through queries to a stochastic gradient oracle requires at least $\Omega(\epsilon^{-4})$ samples. Moreover, for finite Markov chains, we show an $\Omega(\epsilon^{-2})$ lower bound and propose a new algorithm, called MaC-SAGE, that provably (nearly) matches our lower bound.
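Here an $\epsilon$-approximate critical point is understood in the usual sense of a point $x$ with $\|\nabla f(x)\| \le \epsilon$. As a minimal sketch of the sampling model (not the paper's MaC-SAGE algorithm), the snippet below runs plain SGD on a toy smooth non-convex objective while the sample index that drives the stochastic gradient oracle evolves along a finite Markov chain rather than being drawn i.i.d.; the objective, transition matrix, and step size are placeholder assumptions.

```python
import numpy as np

# Illustrative sketch only (NOT the paper's MaC-SAGE): vanilla SGD where the
# sample index s_t feeding the stochastic gradient oracle follows a finite
# Markov chain instead of i.i.d. draws. All problem data are placeholders.

rng = np.random.default_rng(0)
d, n_states, T, step = 10, 3, 20000, 1e-2

a = rng.standard_normal((n_states, d))            # per-state data vectors (placeholder)
P = np.array([[0.90, 0.05, 0.05],                 # transition matrix of the sampling chain
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])

def stochastic_grad(x, s):
    """Stochastic gradient oracle for the smooth non-convex component
    F(x, s) = -cos(a_s^T x); the full objective averages F over the states."""
    return np.sin(a[s] @ x) * a[s]

x = rng.standard_normal(d)
s = 0                                             # initial state of the Markov chain
for t in range(T):
    x -= step * stochastic_grad(x, s)             # first-order update with the current sample
    s = rng.choice(n_states, p=P[s])              # next sample: one step of the chain

full_grad = sum(stochastic_grad(x, i) for i in range(n_states)) / n_states
print("gradient norm of the full objective:", np.linalg.norm(full_grad))
```

Setting every row of `P` to the same distribution recovers i.i.d. sampling, i.e., the vanilla counterpart the abstract contrasts with.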
Cite
Text
Sun and Wei. "Improved Lower Bounds for First-Order Stochastic Non-Convex Optimization Under Markov Sampling." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Sun and Wei. "Improved Lower Bounds for First-Order Stochastic Non-Convex Optimization Under Markov Sampling." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/sun2025icml-improved/)
BibTeX
@inproceedings{sun2025icml-improved,
title = {{Improved Lower Bounds for First-Order Stochastic Non-Convex Optimization Under Markov Sampling}},
author = {Sun, Zhenyu and Wei, Ermin},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {57754--57772},
volume = {267},
url = {https://mlanthology.org/icml/2025/sun2025icml-improved/}
}