ASAGA: Asynchronous Parallel SAGA

Abstract

We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced "perturbed iterate" framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
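For context, the step that ASAGA parallelizes is the sequential SAGA update, which stores one past gradient per sample and combines it with a fresh gradient to form a variance-reduced direction. The sketch below is illustrative only and assumes names of our own choosing (saga, grad_i); it is not the paper's implementation. In ASAGA, each core runs this inner loop lock-free over a shared parameter vector, and the perturbed iterate analysis accounts for the resulting inconsistent reads.

import numpy as np

def saga(grad_i, x0, n, gamma, n_steps, seed=0):
    # Minimal sequential SAGA sketch (names are illustrative, not from the paper).
    # grad_i(i, x) returns the gradient of the i-th sample's loss at x.
    # ASAGA runs this loop concurrently on each core over shared memory, lock-free.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    memory = np.array([grad_i(i, x0) for i in range(n)])  # one stored gradient per sample
    avg = memory.mean(axis=0)                             # running average of the memory
    for _ in range(n_steps):
        i = rng.integers(n)
        g = grad_i(i, x)
        x = x - gamma * (g - memory[i] + avg)             # variance-reduced SAGA step
        avg = avg + (g - memory[i]) / n                    # update average before overwriting
        memory[i] = g
    return x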

Cite

Text

Leblond et al. "ASAGA: Asynchronous Parallel SAGA." International Conference on Artificial Intelligence and Statistics, 2017.

Markdown

[Leblond et al. "ASAGA: Asynchronous Parallel SAGA." International Conference on Artificial Intelligence and Statistics, 2017.](https://mlanthology.org/aistats/2017/leblond2017aistats-asaga/)

BibTeX

@inproceedings{leblond2017aistats-asaga,
  title     = {{ASAGA: Asynchronous Parallel SAGA}},
  author    = {Leblond, Rémi and Pedregosa, Fabian and Lacoste-Julien, Simon},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2017},
  pages     = {46--54},
  url       = {https://mlanthology.org/aistats/2017/leblond2017aistats-asaga/}
}