Stochastic Parallel Block Coordinate Descent for Large-Scale Saddle Point Problems
Abstract
We consider convex-concave saddle point problems with a separable structure and non-strongly convex functions. We propose an efficient stochastic block coordinate descent method using adaptive primal-dual updates, which enables flexible parallel optimization for large-scale problems. Our method combines the efficiency and flexibility of block coordinate descent methods with the simplicity of primal-dual methods, while exploiting the separable structure of the convex-concave saddle point problem. It is applicable to a wide range of machine learning problems, including robust principal component analysis, Lasso, and feature selection by group Lasso. Theoretically and empirically, we demonstrate significantly better performance than state-of-the-art methods in all these applications.
Cite
Text
Zhu and Storkey. "Stochastic Parallel Block Coordinate Descent for Large-Scale Saddle Point Problems." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10188
Markdown
[Zhu and Storkey. "Stochastic Parallel Block Coordinate Descent for Large-Scale Saddle Point Problems." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/zhu2016aaai-stochastic/) doi:10.1609/AAAI.V30I1.10188
BibTeX
@inproceedings{zhu2016aaai-stochastic,
title = {{Stochastic Parallel Block Coordinate Descent for Large-Scale Saddle Point Problems}},
author = {Zhu, Zhanxing and Storkey, Amos J.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2016},
pages = {2429-2437},
doi = {10.1609/AAAI.V30I1.10188},
url = {https://mlanthology.org/aaai/2016/zhu2016aaai-stochastic/}
}