Tighter Lower Bounds for Shuffling SGD: Random Permutations and Beyond

Abstract

We study convergence lower bounds of without-replacement stochastic gradient descent (SGD) for solving smooth (strongly-)convex finite-sum minimization problems. Unlike most existing results, which focus on final-iterate lower bounds in terms of the number of components $n$ and the number of epochs $K$, we seek bounds for arbitrary weighted average iterates that are tight in all factors including the condition number $\kappa$. For SGD with Random Reshuffling, we present lower bounds with tighter $\kappa$ dependencies than existing bounds. Our results are the first to perfectly close the gap between lower and upper bounds for weighted average iterates in both the strongly-convex and convex cases. We also prove weighted average iterate lower bounds for arbitrary permutation-based SGD, which apply to all variants that carefully choose the best permutation. Our bounds improve upon the existing bounds in the factors of $n$ and $\kappa$, and thereby match the upper bounds shown for the recently proposed algorithm GraB.
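For context on the setting above, the following is a minimal sketch (not from the paper) of SGD with Random Reshuffling on a finite-sum objective $F(x) = \frac{1}{n}\sum_{i=1}^n f_i(x)$: each epoch draws a fresh permutation and takes one pass over the components without replacement. The function name shuffle_sgd_rr, the fixed step size, and the uniform average of end-of-epoch iterates (one simple instance of the weighted average iterates the bounds cover) are illustrative assumptions, not the paper's construction.

import numpy as np

def shuffle_sgd_rr(grads, x0, lr, n, K, rng=None):
    """SGD with Random Reshuffling on F(x) = (1/n) * sum_i f_i(x).

    grads: list of n callables; grads[i](x) returns the gradient of f_i at x.
    Returns the final iterate and a uniform average of end-of-epoch iterates
    (one choice of weighted average; the paper's bounds allow arbitrary weights).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    epoch_iterates = []
    for _ in range(K):
        perm = rng.permutation(n)   # fresh permutation each epoch (Random Reshuffling)
        for i in perm:              # one pass over all n components, no replacement
            x = x - lr * grads[i](x)
        epoch_iterates.append(x.copy())
    x_avg = np.mean(epoch_iterates, axis=0)
    return x, x_avg

# Toy usage on a strongly convex quadratic: F(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(0)
n, d = 8, 3
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
grads = [lambda x, a=A[i], bi=b[i]: (a @ x - bi) * a for i in range(n)]
x_last, x_avg = shuffle_sgd_rr(grads, np.zeros(d), lr=0.05, n=n, K=50, rng=rng)

Permutation-based variants such as GraB differ only in how perm is chosen: instead of sampling it uniformly each epoch, they construct the permutation to reduce error, which is exactly the class of algorithms the paper's arbitrary-permutation lower bounds apply to.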

Cite

Text

Cha et al. "Tighter Lower Bounds for Shuffling SGD: Random Permutations and Beyond." International Conference on Machine Learning, 2023.

Markdown

[Cha et al. "Tighter Lower Bounds for Shuffling SGD: Random Permutations and Beyond." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/cha2023icml-tighter/)

BibTeX

@inproceedings{cha2023icml-tighter,
  title     = {{Tighter Lower Bounds for Shuffling SGD: Random Permutations and Beyond}},
  author    = {Cha, Jaeyoung and Lee, Jaewook and Yun, Chulhee},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {3855--3912},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/cha2023icml-tighter/}
}