An Accelerated Proximal Coordinate Gradient Method
Abstract
We develop an accelerated randomized proximal coordinate gradient (APCG) method for solving a broad class of composite convex optimization problems. In particular, our method achieves faster linear convergence rates for minimizing strongly convex functions than existing randomized proximal coordinate gradient methods. We show how to apply the APCG method to solve the dual of the regularized empirical risk minimization (ERM) problem, and devise efficient implementations that avoid full-dimensional vector operations. For ill-conditioned ERM problems, our method achieves better convergence rates than the state-of-the-art stochastic dual coordinate ascent (SDCA) method.
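To make the setting concrete, below is a minimal sketch of the *non-accelerated* randomized proximal coordinate gradient baseline that APCG improves upon, applied to a lasso instance of the composite problem (smooth least-squares term plus a separable l1 regularizer). It also illustrates the abstract's point about avoiding full-dimensional vector operations: the residual is updated with a rank-one correction so each iteration touches only one column of the data matrix. The function names and problem instance are illustrative, not from the paper; APCG itself would add a Nesterov-style extrapolation sequence on top of this coordinate update.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * |.| (scalar shrinkage).
    return np.sign(v) * max(abs(v) - t, 0.0)

def rpcg_lasso(A, b, lam, iters=3000, seed=0):
    """Randomized proximal coordinate gradient (no acceleration) for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    The residual r = A x - b is maintained incrementally, so each
    iteration costs O(m) for one column, not O(m n) for a full gradient.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = -b.astype(float).copy()      # residual A x - b at x = 0
    L = (A ** 2).sum(axis=0)         # coordinate-wise Lipschitz constants
    for _ in range(iters):
        i = rng.integers(n)          # sample a coordinate uniformly
        g = A[:, i] @ r              # partial gradient w.r.t. x_i
        xi_new = soft_threshold(x[i] - g / L[i], lam / L[i])
        delta = xi_new - x[i]
        if delta != 0.0:
            r += delta * A[:, i]     # rank-one residual update
            x[i] = xi_new
    return x
```

On a well-conditioned problem this baseline already converges linearly; the paper's contribution is that acceleration improves the rate's dependence on the condition number, which matters most in the ill-conditioned ERM regime mentioned above.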
Cite
Text
Lin et al. "An Accelerated Proximal Coordinate Gradient Method." Neural Information Processing Systems, 2014.
Markdown
[Lin et al. "An Accelerated Proximal Coordinate Gradient Method." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/lin2014neurips-accelerated/)
BibTeX
@inproceedings{lin2014neurips-accelerated,
title = {{An Accelerated Proximal Coordinate Gradient Method}},
author = {Lin, Qihang and Lu, Zhaosong and Xiao, Lin},
booktitle = {Neural Information Processing Systems},
year = {2014},
pages = {3059--3067},
url = {https://mlanthology.org/neurips/2014/lin2014neurips-accelerated/}
}