Better Mini-Batch Algorithms via Accelerated Gradient Methods

Abstract

Mini-batch algorithms have recently received significant attention as a way to speed up stochastic convex optimization problems. In this paper, we study how such algorithms can be improved using accelerated gradient methods. We provide a novel analysis, which shows how standard gradient methods may sometimes be insufficient to obtain a significant speed-up. We propose a novel accelerated gradient algorithm, which deals with this deficiency and enjoys a uniformly superior guarantee. We conclude our paper with experiments on real-world datasets, which validate our algorithm and substantiate our theoretical insights.
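
The sketch below is only a generic illustration of the idea the abstract refers to: combining Nesterov-style acceleration with mini-batch stochastic gradients. It is not the paper's specific algorithm or step-size scheme; the function name `accelerated_minibatch_gd`, the momentum weight, and the constant step size are illustrative assumptions.

```python
import numpy as np

def accelerated_minibatch_gd(grad_fn, w0, data, n_iters=100,
                             batch_size=32, step_size=0.1, seed=0):
    """Nesterov-style accelerated gradient method with mini-batch
    stochastic gradients (illustrative sketch, not the paper's method)."""
    rng = np.random.default_rng(seed)
    w, w_prev = w0.copy(), w0.copy()
    for t in range(1, n_iters + 1):
        beta = (t - 1) / (t + 2)            # standard Nesterov momentum weight
        y = w + beta * (w - w_prev)         # look-ahead (momentum) point
        idx = rng.choice(len(data), size=batch_size, replace=False)
        g = grad_fn(y, data[idx])           # mini-batch gradient at the look-ahead point
        w_prev, w = w, y - step_size * g    # gradient step from the look-ahead point
    return w

# Example usage: mini-batch least-squares regression on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=1000)
data = np.hstack([X, y[:, None]])

def ls_grad(w, batch):
    Xb, yb = batch[:, :-1], batch[:, -1]
    return Xb.T @ (Xb @ w - yb) / len(yb)

w_hat = accelerated_minibatch_gd(ls_grad, np.zeros(5), data,
                                 n_iters=500, batch_size=50)
print(np.linalg.norm(w_hat - w_true))
```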

Cite

Text

Cotter et al. "Better Mini-Batch Algorithms via Accelerated Gradient Methods." Neural Information Processing Systems, 2011.

Markdown

[Cotter et al. "Better Mini-Batch Algorithms via Accelerated Gradient Methods." Neural Information Processing Systems, 2011.](https://mlanthology.org/neurips/2011/cotter2011neurips-better/)

BibTeX

@inproceedings{cotter2011neurips-better,
  title     = {{Better Mini-Batch Algorithms via Accelerated Gradient Methods}},
  author    = {Cotter, Andrew and Shamir, Ohad and Srebro, Nati and Sridharan, Karthik},
  booktitle = {Neural Information Processing Systems},
  year      = {2011},
  pages     = {1647--1655},
  url       = {https://mlanthology.org/neurips/2011/cotter2011neurips-better/}
}