Stochastic Distributed Optimization Under Average Second-Order Similarity: Algorithms and Analysis
Abstract
We study finite-sum distributed optimization problems involving a master node and $n-1$ local nodes under the popular $\delta$-similarity and $\mu$-strong convexity conditions. We propose two new algorithms, SVRS and AccSVRS, building on previous work. The non-accelerated SVRS method combines gradient sliding with variance reduction and achieves a communication complexity of $\tilde{\mathcal{O}}(n + \sqrt{n}\delta/\mu)$, improving on existing non-accelerated algorithms. Applying the framework proposed in Katyusha X, we also develop a directly accelerated version named AccSVRS with communication complexity $\tilde{\mathcal{O}}(n + n^{3/4}\sqrt{\delta/\mu})$. In contrast to existing results, our complexity bounds are entirely smoothness-free and are superior in ill-conditioned cases. Furthermore, we establish a nearly matching lower bound to verify the tightness of our AccSVRS method.
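For concreteness, the setting can be written as minimizing a finite-sum objective whose component Hessians stay close to the full Hessian on average; the averaging and norm conventions below follow the standard average second-order similarity condition and are our reading, not a quotation from the paper:

$$\min_{x \in \mathbb{R}^d} \; f(x) := \frac{1}{n} \sum_{i=1}^{n} f_i(x), \qquad \frac{1}{n} \sum_{i=1}^{n} \bigl\| \nabla^2 f_i(x) - \nabla^2 f(x) \bigr\|^2 \le \delta^2 \quad \text{for all } x,$$

with each $f_i$ held by one node and $f$ being $\mu$-strongly convex. Comparing the two bounds above, the accelerated rate dominates the non-accelerated one exactly in the ill-conditioned regime: $n^{3/4}\sqrt{\delta/\mu} \le \sqrt{n}\,\delta/\mu$ if and only if $\delta/\mu \ge \sqrt{n}$.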
Cite
Text
Lin et al. "Stochastic Distributed Optimization Under Average Second-Order Similarity: Algorithms and Analysis." Neural Information Processing Systems, 2023.

Markdown
[Lin et al. "Stochastic Distributed Optimization Under Average Second-Order Similarity: Algorithms and Analysis." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/lin2023neurips-stochastic/)

BibTeX
@inproceedings{lin2023neurips-stochastic,
title = {{Stochastic Distributed Optimization Under Average Second-Order Similarity: Algorithms and Analysis}},
author = {Lin, Dachao and Han, Yuze and Ye, Haishan and Zhang, Zhihua},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/lin2023neurips-stochastic/}
}