Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization

Abstract

We show that stochastic acceleration can be achieved under the perturbed iterate framework (Mania et al., 2017) in asynchronous lock-free optimization, which leads to the optimal incremental gradient complexity for finite-sum objectives. We prove that our new accelerated method requires the same linear speed-up condition as existing non-accelerated methods. Our key algorithmic discovery is a new accelerated SVRG variant with sparse updates. Empirical results are presented to verify our theoretical findings.
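For context on the kind of method the abstract refers to, below is a minimal sketch of the standard serial (non-accelerated, dense) SVRG loop for a finite-sum objective. This is only the well-known baseline that accelerated, sparse, asynchronous variants build upon; it is not the paper's algorithm, and names such as `svrg` and `grad_i` are illustrative placeholders.

```python
# Illustrative baseline only: plain serial SVRG for min_x (1/n) * sum_i f_i(x).
# The paper's method (accelerated, sparse, lock-free) is NOT reproduced here.
import numpy as np

def svrg(grad_i, x0, n, step, n_epochs=20, inner_steps=None, rng=None):
    """grad_i(x, i) returns the gradient of the i-th component f_i at x."""
    rng = np.random.default_rng() if rng is None else rng
    m = n if inner_steps is None else inner_steps
    x_ref = x0.copy()
    for _ in range(n_epochs):
        # Full gradient at the snapshot (reference) point.
        full_grad = np.mean([grad_i(x_ref, i) for i in range(n)], axis=0)
        x = x_ref.copy()
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient.
            v = grad_i(x, i) - grad_i(x_ref, i) + full_grad
            x = x - step * v
        x_ref = x  # new snapshot for the next epoch
    return x_ref
```

For example, for least squares one could pass `grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])` with rows `A[i]` and targets `b[i]`; the asynchronous and accelerated aspects studied in the paper are layered on top of updates of this form.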

Cite

Text

Zhou et al. "Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization." NeurIPS 2022 Workshops: OPT, 2022.

Markdown

[Zhou et al. "Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization." NeurIPS 2022 Workshops: OPT, 2022.](https://mlanthology.org/neuripsw/2022/zhou2022neuripsw-accelerating/)

BibTeX

@inproceedings{zhou2022neuripsw-accelerating,
  title     = {{Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization}},
  author    = {Zhou, Kaiwen and So, Anthony Man-Cho and Cheng, James},
  booktitle = {NeurIPS 2022 Workshops: OPT},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/zhou2022neuripsw-accelerating/}
}