Stochastic Methods for L1 Regularized Loss Minimization

Abstract

We describe and analyze two stochastic methods for $\ell_1$ regularized loss minimization problems, such as the Lasso. The first method updates the weight of a single feature at each iteration, while the second method updates the entire weight vector but uses only a single training example at each iteration. In both methods, the feature or example is chosen uniformly at random. Our theoretical runtime analysis suggests that the stochastic methods should outperform state-of-the-art deterministic approaches, including their deterministic counterparts, when the size of the problem is large. We demonstrate the advantage of stochastic methods by experimenting with synthetic and natural data sets.
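As a rough illustration of the first method (a minimal sketch, not the paper's exact algorithm or step sizes), stochastic coordinate descent for the Lasso picks one feature uniformly at random per iteration and applies a soft-thresholding update to that single weight; the per-coordinate curvature bounds `beta` and the squared-loss objective here are assumptions for the sketch:

```python
import numpy as np

def soft_threshold(z, t):
    # Shrink z toward zero by t (the proximal map of the l1 norm).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def stochastic_coordinate_descent(X, y, lam, n_iters=20000, seed=0):
    """Minimize (1/2m)||Xw - y||^2 + lam * ||w||_1 by updating one
    uniformly random coordinate per iteration (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    residual = X @ w - y              # maintained incrementally
    beta = (X ** 2).sum(axis=0) / m   # per-coordinate curvature bounds
    for _ in range(n_iters):
        j = rng.integers(d)           # uniform random feature
        g = X[:, j] @ residual / m    # partial derivative of the loss at w
        w_new = soft_threshold(w[j] - g / beta[j], lam / beta[j])
        residual += (w_new - w[j]) * X[:, j]
        w[j] = w_new
    return w
```

Each iteration touches only one column of `X`, which is the source of the claimed advantage over deterministic full-gradient methods when the problem is large.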

Cite

Text

Shalev-Shwartz and Tewari. "Stochastic Methods for L1 Regularized Loss Minimization." International Conference on Machine Learning, 2009. doi:10.1145/1553374.1553493

Markdown

[Shalev-Shwartz and Tewari. "Stochastic Methods for L1 Regularized Loss Minimization." International Conference on Machine Learning, 2009.](https://mlanthology.org/icml/2009/shalevshwartz2009icml-stochastic/) doi:10.1145/1553374.1553493

BibTeX

@inproceedings{shalevshwartz2009icml-stochastic,
  title     = {{Stochastic Methods for L1 Regularized Loss Minimization}},
  author    = {Shalev-Shwartz, Shai and Tewari, Ambuj},
  booktitle = {International Conference on Machine Learning},
  year      = {2009},
  pages     = {929--936},
  doi       = {10.1145/1553374.1553493},
  url       = {https://mlanthology.org/icml/2009/shalevshwartz2009icml-stochastic/}
}