A Unified Variance-Reduced Accelerated Gradient Method for Convex Optimization

Abstract

We propose a novel randomized incremental gradient algorithm, namely, VAriance-Reduced Accelerated Gradient (Varag), for finite-sum optimization. Equipped with a unified step-size policy that adjusts itself to the value of the condition number, Varag exhibits unified optimal rates of convergence for solving smooth convex finite-sum problems directly, regardless of their strong convexity. Moreover, Varag is the first accelerated randomized incremental gradient method that benefits from the strong convexity of the data-fidelity term to achieve optimal linear convergence. It also establishes an optimal linear rate of convergence for solving a wide class of problems that only satisfy a certain error bound condition rather than strong convexity. Varag can also be extended to solve stochastic finite-sum problems.
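The abstract concerns smooth convex finite-sum problems of the form min_x (1/m) * sum_i f_i(x). As a rough illustration of the variance-reduction idea underlying this family of methods, the Python sketch below shows a standard SVRG-style gradient estimator combined with plain gradient steps on a toy least-squares problem. This is only a minimal sketch of the generic variance-reduction technique, not the authors' Varag update, which additionally uses Nesterov-type acceleration and the unified step-size policy described in the paper; all function and variable names here are illustrative.

import numpy as np

def svrg_style_estimator(grad_fi, m, x, x_snapshot, full_grad_snapshot, rng):
    """Unbiased, variance-reduced estimate of the full gradient at x.

    grad_fi(i, x)      -- gradient of the i-th component f_i at x (assumed callable)
    m                  -- number of components in the finite sum
    x                  -- current iterate
    x_snapshot         -- reference point at which the full gradient was computed
    full_grad_snapshot -- (1/m) * sum_i grad f_i(x_snapshot)
    rng                -- numpy random generator
    """
    i = rng.integers(m)  # sample one component uniformly at random
    return grad_fi(i, x) - grad_fi(i, x_snapshot) + full_grad_snapshot

if __name__ == "__main__":
    # Toy least-squares example: f_i(x) = 0.5 * (a_i^T x - b_i)^2
    rng = np.random.default_rng(0)
    m, d = 50, 5
    A, b = rng.standard_normal((m, d)), rng.standard_normal(m)

    def grad_fi(i, x):
        return (A[i] @ x - b[i]) * A[i]

    x = np.zeros(d)
    x_snap = x.copy()
    g_snap = (A.T @ (A @ x_snap - b)) / m  # full gradient at the snapshot

    # A few plain (non-accelerated) variance-reduced steps with a fixed step size.
    step = 0.05
    for _ in range(200):
        g = svrg_style_estimator(grad_fi, m, x, x_snap, g_snap, rng)
        x -= step * g
    print("objective:", 0.5 * np.mean((A @ x - b) ** 2))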

Cite

Text

Lan et al. "A Unified Variance-Reduced Accelerated Gradient Method for Convex Optimization." Neural Information Processing Systems, 2019.

Markdown

[Lan et al. "A Unified Variance-Reduced Accelerated Gradient Method for Convex Optimization." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/lan2019neurips-unified/)

BibTeX

@inproceedings{lan2019neurips-unified,
  title     = {{A Unified Variance-Reduced Accelerated Gradient Method for Convex Optimization}},
  author    = {Lan, Guanghui and Li, Zhize and Zhou, Yi},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {10462--10472},
  url       = {https://mlanthology.org/neurips/2019/lan2019neurips-unified/}
}