Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization
Abstract
We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.
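As context for the abstract, the following is a minimal sketch of the basic (non-accelerated) stochastic dual coordinate ascent update for ridge regression with squared loss, one of the problems listed above. The function name, hyperparameters, and synthetic data are illustrative assumptions rather than the paper's implementation; the accelerated proximal method analyzed in the paper wraps inner iterations of this kind inside an outer acceleration loop.

import numpy as np

def sdca_ridge(X, y, lam, epochs=50, seed=0):
    """Plain (non-accelerated) SDCA sketch for ridge regression:
    min_w (1/n) * sum_i 0.5*(x_i^T w - y_i)^2 + (lam/2)*||w||^2.
    Illustrative only; hyperparameters and names are assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)            # dual variables, one per example
    w = np.zeros(d)                # primal iterate, w = (1/(lam*n)) * sum_i alpha_i x_i
    sqnorm = (X ** 2).sum(axis=1)  # precomputed ||x_i||^2
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Closed-form coordinate maximization of the dual for squared loss.
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sqnorm[i] / (lam * n))
            alpha[i] += delta
            w += (delta / (lam * n)) * X[i]
    return w

# Tiny usage example on synthetic data (illustrative).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 5))
    w_true = rng.standard_normal(5)
    y = X @ w_true + 0.1 * rng.standard_normal(200)
    w_hat = sdca_ridge(X, y, lam=0.1)
    print(np.round(w_hat, 3))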
Cite
Text
Shalev-Shwartz and Zhang. "Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization." International Conference on Machine Learning, 2014.
Markdown
[Shalev-Shwartz and Zhang. "Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/shalevshwartz2014icml-accelerated/)
BibTeX
@inproceedings{shalevshwartz2014icml-accelerated,
  title = {{Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization}},
  author = {Shalev-Shwartz, Shai and Zhang, Tong},
  booktitle = {International Conference on Machine Learning},
  year = {2014},
  pages = {64--72},
  volume = {32},
  url = {https://mlanthology.org/icml/2014/shalevshwartz2014icml-accelerated/}
}