Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
Abstract
Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
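For a concrete sense of what SDCA does in the SVM setting the abstract mentions, below is a minimal sketch of the stochastic dual coordinate ascent loop for the L2-regularized hinge-loss SVM, using the closed-form per-coordinate update for Lipschitz losses. The function name, variable names, and plain-NumPy structure are illustrative assumptions, not the authors' reference implementation.

import numpy as np

def sdca_svm(X, y, lam=0.01, epochs=10, seed=0):
    """Sketch of SDCA for min_w (1/n) sum_i max(0, 1 - y_i w'x_i) + (lam/2)||w||^2.

    Maintains dual variables alpha and the primal vector
    w = (1/(lam*n)) * sum_i alpha_i * x_i, updating one random
    coordinate of alpha per step in closed form.
    """
    n, d = X.shape
    alpha = np.zeros(n)   # dual variables, one per training example
    w = np.zeros(d)       # primal vector kept consistent with alpha
    rng = np.random.default_rng(seed)

    for _ in range(epochs):
        for i in rng.permutation(n):
            x_i, y_i = X[i], y[i]
            sq_norm = x_i @ x_i
            if sq_norm == 0.0:
                continue  # nothing to update for an all-zero example
            # Closed-form dual maximization over coordinate i (hinge loss):
            # the new value of y_i * alpha_i is clipped to [0, 1].
            grad = 1.0 - y_i * (w @ x_i)
            candidate = y_i * alpha[i] + (lam * n) * grad / sq_norm
            delta = y_i * np.clip(candidate, 0.0, 1.0) - alpha[i]
            alpha[i] += delta
            w += delta * x_i / (lam * n)  # preserve the primal-dual mapping
    return w, alpha

The returned w can be used directly for prediction via sign(w @ x); the duality gap between primal and dual objectives gives a natural stopping criterion, which is the quantity the paper's analysis bounds.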
Cite

Shalev-Shwartz and Zhang. "Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization." Journal of Machine Learning Research, 2013. https://mlanthology.org/jmlr/2013/shalevshwartz2013jmlr-stochastic/

BibTeX:
@article{shalevshwartz2013jmlr-stochastic,
title = {{Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization}},
author = {Shalev-Shwartz, Shai and Zhang, Tong},
journal = {Journal of Machine Learning Research},
year = {2013},
pages = {567-599},
volume = {14},
url = {https://mlanthology.org/jmlr/2013/shalevshwartz2013jmlr-stochastic/}
}