Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Abstract
We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variables. An extrapolation step on the primal variables is performed to obtain an accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.
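To make the alternating structure concrete, here is a minimal sketch (not the authors' reference code) instantiating SPDC for ridge regression, where both the dual coordinate step and the primal proximal step have closed forms. The function name `spdc_ridge` is hypothetical, and the step sizes `tau`, `sigma`, and `theta` are set in the spirit of the paper's parameter recipe with smoothness constant gamma = 1 for the squared loss.

```python
import numpy as np

def spdc_ridge(A, b, lam, iters=20000, seed=0):
    """Illustrative SPDC sketch (hypothetical helper) for ridge regression:
        min_x (1/(2n)) * ||A x - b||^2 + (lam/2) * ||x||^2.
    Squared loss and L2 regularizer are chosen so both proximal
    subproblems have closed forms."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    R = np.max(np.linalg.norm(A, axis=1))           # max row norm
    tau = np.sqrt(1.0 / (n * lam)) / (2 * R)        # primal step size
    sigma = np.sqrt(n * lam) / (2 * R)              # dual step size
    theta = 1 - 1 / (n + 2 * R * np.sqrt(n / lam))  # extrapolation weight

    x = np.zeros(d)
    x_bar = x.copy()          # extrapolated primal point
    y = np.zeros(n)           # dual variables, one per example
    u = A.T @ y / n           # running average u = (1/n) * sum_i y_i a_i

    for _ in range(iters):
        k = rng.integers(n)
        a_k = A[k]
        # Maximize over the randomly chosen dual coordinate y_k
        # (closed form for the squared-loss conjugate):
        y_new = (sigma * (a_k @ x_bar - b[k]) + y[k]) / (sigma + 1.0)
        dy = y_new - y[k]
        y[k] = y_new
        # Minimize over the primal variables with gradient estimate
        # v = u + dy * a_k (closed form for the L2 regularizer):
        v = u + dy * a_k
        x_new = (x - tau * v) / (1.0 + tau * lam)
        # Extrapolation step on the primal variables (acceleration):
        x_bar = x_new + theta * (x_new - x)
        x = x_new
        u += dy * a_k / n     # maintain the running average
    return x

# Tiny usage check against the closed-form ridge solution.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    b = rng.standard_normal(200)
    lam = 0.1
    x_spdc = spdc_ridge(A, b, lam)
    x_star = np.linalg.solve(A.T @ A / len(b) + lam * np.eye(10),
                             A.T @ b / len(b))
    print(np.linalg.norm(x_spdc - x_star))  # should be small
```

For general smooth losses the dual step becomes a one-dimensional proximal problem involving the conjugate of the loss; the squared loss is used here only because it keeps every update in closed form.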
Cite
Text
Zhang and Xiao. "Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization." Journal of Machine Learning Research, 2017.
Markdown
[Zhang and Xiao. "Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization." Journal of Machine Learning Research, 2017.](https://mlanthology.org/jmlr/2017/zhang2017jmlr-stochastic/)
BibTeX
@article{zhang2017jmlr-stochastic,
title = {{Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization}},
author = {Zhang, Yuchen and Xiao, Lin},
journal = {Journal of Machine Learning Research},
year = {2017},
pages = {1--42},
volume = {18},
url = {https://mlanthology.org/jmlr/2017/zhang2017jmlr-stochastic/}
}