Stop Wasting My Gradients: Practical SVRG

Abstract

We present and analyze several strategies for improving the performance of stochastic variance-reduced gradient (SVRG) methods. We first show that the convergence rate of these methods can be preserved under a decreasing sequence of errors in the control variate, and use this to derive variants of SVRG that use growing-batch strategies to reduce the number of gradient calculations required in the early iterations. We further (i) show how to exploit support vectors to reduce the number of gradient computations in the later iterations, (ii) prove that the commonly-used regularized SVRG iteration is justified and improves the convergence rate, (iii) consider alternate mini-batch selection strategies, and (iv) consider the generalization error of the method.
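For context, below is a minimal Python/NumPy sketch of the SVRG update combined with a growing-batch approximation of the snapshot gradient, in the spirit of the strategy the abstract describes. The function name `svrg_growing_batch`, the geometric batch-growth schedule, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def svrg_growing_batch(grad_i, n, x0, step=0.1, stages=20, inner=None, rng=None):
    """Sketch of SVRG with a growing-batch snapshot gradient.

    grad_i(x, i) returns the gradient of the i-th component function at x.
    The batch used for the snapshot gradient grows geometrically, so early
    stages avoid a full pass over all n examples (assumed schedule).
    """
    rng = np.random.default_rng() if rng is None else rng
    inner = n if inner is None else inner
    x_tilde = np.asarray(x0, dtype=float)
    batch_size = max(1, n // 2 ** stages)           # small initial batch

    for _ in range(stages):
        batch_size = min(n, 2 * batch_size)         # grow the batch each stage
        batch = rng.choice(n, size=batch_size, replace=False)
        mu = np.mean([grad_i(x_tilde, i) for i in batch], axis=0)  # approximate full gradient

        x = x_tilde.copy()
        for _ in range(inner):
            i = rng.integers(n)
            # variance-reduced stochastic step
            x -= step * (grad_i(x, i) - grad_i(x_tilde, i) + mu)
        x_tilde = x                                  # new snapshot
    return x_tilde

# Example use on least squares, f_i(x) = 0.5 * (a_i^T x - b_i)^2 (synthetic data):
A, b = np.random.randn(200, 5), np.random.randn(200)
grad = lambda x, i: A[i] * (A[i] @ x - b[i])
x_hat = svrg_growing_batch(grad, n=200, x0=np.zeros(5), step=0.01, stages=8, inner=200)
```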

Cite

Text

Harikandeh et al. "Stop Wasting My Gradients: Practical SVRG." Neural Information Processing Systems, 2015.

Markdown

[Harikandeh et al. "Stop Wasting My Gradients: Practical SVRG." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/harikandeh2015neurips-stopwasting/)

BibTeX

@inproceedings{harikandeh2015neurips-stopwasting,
  title     = {{Stop Wasting My Gradients: Practical SVRG}},
  author    = {Harikandeh, Reza Babanezhad and Ahmed, Mohamed Osama and Virani, Alim and Schmidt, Mark and Konečný, Jakub and Sallinen, Scott},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {2251-2259},
  url       = {https://mlanthology.org/neurips/2015/harikandeh2015neurips-stopwasting/}
}