Geometric Descent Method for Convex Composite Minimization
Abstract
In this paper, we extend the geometric descent method recently proposed by Bubeck, Lee and Singh to tackle nonsmooth and strongly convex composite problems. We prove that our proposed algorithm, dubbed geometric proximal gradient method (GeoPG), converges with a linear rate $(1-1/\sqrt{\kappa})$ and thus achieves the optimal rate among first-order methods, where $\kappa$ is the condition number of the problem. Numerical results on linear regression and logistic regression with elastic net regularization show that GeoPG compares favorably with Nesterov's accelerated proximal gradient method, especially when the problem is ill-conditioned.
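The abstract does not spell out GeoPG's update rule, but the problem class it targets is standard composite minimization. As a point of reference only, the sketch below applies a plain proximal gradient step (not GeoPG itself) to the elastic-net-regularized least-squares problem used in the experiments; all names and parameters here (`soft_threshold`, `prox_grad_elastic_net`, `lam1`, `lam2`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_elastic_net(A, b, lam1, lam2, n_iters=500):
    """Plain proximal gradient on the composite problem
        min_x 0.5*||Ax - b||^2 + (lam2/2)*||x||^2 + lam1*||x||_1,
    where the smooth part f(x) = 0.5*||Ax - b||^2 + (lam2/2)*||x||^2
    is strongly convex (modulus at least lam2) and the nonsmooth
    part g(x) = lam1*||x||_1 is handled through its prox."""
    x = np.zeros(A.shape[1])
    # Lipschitz constant of grad f: largest eigenvalue of A^T A plus lam2.
    L = np.linalg.norm(A, 2) ** 2 + lam2
    eta = 1.0 / L
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b) + lam2 * x
        x = soft_threshold(x - eta * grad, eta * lam1)
    return x
```

For this problem the condition number is $\kappa = L/\mu$ with $\mu \geq \lambda_2$; plain proximal gradient converges at rate $(1 - 1/\kappa)$ per iteration, whereas accelerated schemes such as GeoPG or Nesterov's accelerated proximal gradient improve this to $(1 - 1/\sqrt{\kappa})$, which matters precisely in the ill-conditioned regime the experiments highlight.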
Cite
Text
Chen et al. "Geometric Descent Method for Convex Composite Minimization." Neural Information Processing Systems, 2017.
Markdown
[Chen et al. "Geometric Descent Method for Convex Composite Minimization." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/chen2017neurips-geometric/)
BibTeX
@inproceedings{chen2017neurips-geometric,
title = {{Geometric Descent Method for Convex Composite Minimization}},
author = {Chen, Shixiang and Ma, Shiqian and Liu, Wei},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {636--644},
url = {https://mlanthology.org/neurips/2017/chen2017neurips-geometric/}
}