Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization

Abstract

We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over one (or more) randomly chosen dual variable and minimizing over the primal variable. We also develop an extension to non-smooth and non-strongly convex loss functions, and an extension with better convergence rate on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.
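To make the alternating primal-dual updates concrete, below is a minimal NumPy sketch of an SPDC-style iteration for ridge regression (squared loss with an ℓ2 regularizer), a case where both the dual coordinate maximization and the primal proximal step have closed forms. This is an illustrative sketch, not the paper's Algorithm 1: the function name `spdc_ridge` is ours, and the step sizes and extrapolation parameter are heuristic choices patterned after primal-dual analyses rather than the paper's tuned values.

```python
import numpy as np

def spdc_ridge(A, b, lam=0.1, n_iters=20000, seed=0):
    """SPDC-style iteration for ridge regression:
        min_x  (1/(2n)) * ||A x - b||^2 + (lam/2) * ||x||^2,
    written as the saddle-point problem
        min_x max_y  (1/n) * sum_i [ y_i <a_i, x> - phi_i*(y_i) ] + (lam/2) * ||x||^2,
    where phi_i*(y) = y^2/2 + b_i * y is the conjugate of the squared loss.
    Step sizes below are illustrative heuristics, not the paper's tuned values.
    """
    n, d = A.shape
    rng = np.random.default_rng(seed)
    R = np.max(np.linalg.norm(A, axis=1))   # largest feature-vector norm

    # Primal/dual step sizes and extrapolation parameter (heuristic choices).
    tau = 1.0 / (2.0 * R) * np.sqrt(1.0 / (n * lam))
    sigma = 1.0 / (2.0 * R) * np.sqrt(n * lam)
    theta = 1.0 - 1.0 / (n + 2.0 * R * np.sqrt(n / lam))

    x = np.zeros(d)        # primal variable
    x_bar = x.copy()       # extrapolated primal point
    y = np.zeros(n)        # dual variables, one per example
    u = np.zeros(d)        # running average (1/n) * sum_i y_i * a_i

    for _ in range(n_iters):
        k = rng.integers(n)
        a_k = A[k]

        # Dual step: maximize over the single coordinate y_k with a proximal
        # term; closed form for the squared-loss conjugate.
        y_k_new = (sigma * (a_k @ x_bar - b[k]) + y[k]) / (1.0 + sigma)

        # Primal step: proximal minimization of the l2-regularized objective
        # against an unbiased estimate of the coupling gradient (closed form).
        v = u + (y_k_new - y[k]) * a_k
        x_new = (x - tau * v) / (1.0 + lam * tau)

        # Extrapolate and update the running dual average.
        x_bar = x_new + theta * (x_new - x)
        u += (y_k_new - y[k]) * a_k / n
        y[k] = y_k_new
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.1 * rng.standard_normal(500)

    x_hat = spdc_ridge(A, b, lam=0.1)
    # Compare against the closed-form ridge solution.
    n = A.shape[0]
    x_star = np.linalg.solve(A.T @ A / n + 0.1 * np.eye(20), A.T @ b / n)
    print("distance to ridge solution:", np.linalg.norm(x_hat - x_star))
```

The key structural point the sketch illustrates is that each iteration touches only one row of the data for the dual update, while the primal update stays cheap because the average of the dual-weighted rows is maintained incrementally.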

Cite

Text

Zhang and Xiao. "Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization." International Conference on Machine Learning, 2015.

Markdown

[Zhang and Xiao. "Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/zhang2015icml-stochastic/)

BibTeX

@inproceedings{zhang2015icml-stochastic,
  title     = {{Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization}},
  author    = {Zhang, Yuchen and Xiao, Lin},
  booktitle = {International Conference on Machine Learning},
  year      = {2015},
  pages     = {353--361},
  volume    = {37},
  url       = {https://mlanthology.org/icml/2015/zhang2015icml-stochastic/}
}