Optimal Stochastic and Online Learning with Individual Iterates

Abstract

Stochastic composite mirror descent (SCMD) is a simple and efficient method that captures both the geometric and composite structures of optimization problems in machine learning. Existing strategies require taking either an average or a random selection of iterates to achieve optimal convergence rates, which, however, can either destroy the sparsity of solutions or slow down practical training. In this paper, we propose a theoretically sound strategy to select an individual iterate of the vanilla SCMD, which achieves optimal rates for both convex and strongly convex problems in a non-smooth learning setting. Outputting an individual iterate preserves the sparsity of solutions, which is crucial for proper interpretation in sparse learning problems. We report experimental comparisons with several baseline methods to demonstrate the effectiveness of our method in achieving fast training speed as well as in outputting sparse solutions.
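To make the sparsity argument concrete, below is a minimal sketch, assuming the Euclidean mirror map (under which the SCMD composite step reduces to proximal SGD with soft-thresholding for an ℓ1 penalty), synthetic lasso-style data, and step sizes η_t ∝ 1/√t. All function names and parameters here are illustrative assumptions, and the paper's specific theoretically-derived iterate-selection rule is not reproduced; the sketch only illustrates why an individual prox-step iterate retains exact zeros while the uniform average of iterates does not.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (entrywise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def scmd_l1(X, y, lam=0.1, T=5000, c=1.0, seed=0):
    """Euclidean-case SCMD (proximal SGD) for the lasso-type objective
    (1/2)(x_i^T w - y_i)^2 + lam * ||w||_1, returning both the final
    individual iterate and the uniform average of all iterates."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)                         # sample one example
        g = (X[i] @ w - y[i]) * X[i]                # stochastic gradient of the smooth part
        eta = c / np.sqrt(t)                        # step size for convex problems
        w = soft_threshold(w - eta * g, eta * lam)  # composite (prox) step
        w_avg += (w - w_avg) / t                    # running uniform average
    return w, w_avg

# Sparse ground truth: the average accumulates many small nonzero entries,
# while an individual iterate keeps the exact zeros produced by the prox step.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
w_star = np.zeros(50)
w_star[:5] = 1.0
y = X @ w_star + 0.01 * rng.standard_normal(200)
w_last, w_avg = scmd_l1(X, y)
print("nonzeros (individual iterate):", np.count_nonzero(w_last))
print("nonzeros (averaged iterates): ", np.count_nonzero(w_avg))
```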

Cite

Text

Lei et al. "Optimal Stochastic and Online Learning with Individual Iterates." Neural Information Processing Systems, 2019.

Markdown

[Lei et al. "Optimal Stochastic and Online Learning with Individual Iterates." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/lei2019neurips-optimal/)

BibTeX

@inproceedings{lei2019neurips-optimal,
  title     = {{Optimal Stochastic and Online Learning with Individual Iterates}},
  author    = {Lei, Yunwen and Yang, Peng and Tang, Ke and Zhou, Ding-Xuan},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {5415--5425},
  url       = {https://mlanthology.org/neurips/2019/lei2019neurips-optimal/}
}