Stochastic Methods for L1-Regularized Loss Minimization
Abstract
We describe and analyze two stochastic methods for ℓ1-regularized loss minimization problems, such as the Lasso. The first method updates the weight of a single feature at each iteration, while the second method updates the entire weight vector but uses only a single training example at each iteration. In both methods, the feature or example is chosen uniformly at random. Our theoretical runtime analysis suggests that the stochastic methods should outperform state-of-the-art deterministic approaches, including their deterministic counterparts, when the size of the problem is large. We demonstrate the advantage of the stochastic methods in experiments on synthetic and natural data sets.
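To make the first method concrete, below is a minimal sketch of stochastic coordinate descent for the Lasso with squared loss. The function names (soft_threshold, stochastic_coordinate_descent), the per-coordinate step size 1/L_j, and the fixed iteration budget n_iters are illustrative assumptions, not the paper's exact constants; the paper's analysis uses a uniform bound on the second derivative of a general convex loss.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: argmin_w 0.5*(w - z)^2 + t*|w|.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def stochastic_coordinate_descent(X, y, lam, n_iters=10_000, seed=0):
    # Sketch: minimize (1/(2m))*||X w - y||^2 + lam*||w||_1 by updating one
    # uniformly random coordinate per iteration (squared loss assumed here).
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    residual = X @ w - y                 # residual = X w - y, kept up to date
    curv = (X ** 2).sum(axis=0) / m      # per-coordinate curvature L_j
    for _ in range(n_iters):
        j = rng.integers(d)              # feature chosen uniformly at random
        if curv[j] == 0.0:
            continue                     # all-zero column: nothing to update
        g = X[:, j] @ residual / m       # partial derivative of the smooth part
        # Exact minimization along coordinate j of the local quadratic + l1 term.
        w_new = soft_threshold(w[j] - g / curv[j], lam / curv[j])
        residual += (w_new - w[j]) * X[:, j]
        w[j] = w_new
    return w

Maintaining the residual X w − y incrementally keeps the cost of each iteration at O(m), one pass over a single feature column, which is what makes the coordinate-wise scheme attractive when the number of features is large.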
Cite
Text
Shalev-Shwartz and Tewari. "Stochastic Methods for L1-Regularized Loss Minimization." Journal of Machine Learning Research, 2011.
Markdown
[Shalev-Shwartz and Tewari. "Stochastic Methods for L1-Regularized Loss Minimization." Journal of Machine Learning Research, 2011.](https://mlanthology.org/jmlr/2011/shalevshwartz2011jmlr-stochastic/)
BibTeX
@article{shalevshwartz2011jmlr-stochastic,
title = {{Stochastic Methods for L1-Regularized Loss Minimization}},
author = {Shalev-Shwartz, Shai and Tewari, Ambuj},
journal = {Journal of Machine Learning Research},
year = {2011},
pages = {1865--1892},
volume = {12},
url = {https://mlanthology.org/jmlr/2011/shalevshwartz2011jmlr-stochastic/}
}