On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization

Abstract

Motivated by various applications in machine learning, the problem of minimizing a convex smooth loss function with trace norm regularization has received much attention lately. Currently, a popular method for solving such problems is the proximal gradient method (PGM), which is known to have a sublinear rate of convergence. In this paper, we show that for a large class of loss functions, the convergence rate of the PGM is in fact linear. Our result is established without any strong convexity assumption on the loss function. A key ingredient in our proof is a new Lipschitzian error bound for the aforementioned trace norm-regularized problem, which may be of independent interest.
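For readers unfamiliar with the method the abstract analyzes, the PGM iteration for a trace norm-regularized problem alternates a gradient step on the smooth loss with the proximal operator of the trace norm, which is singular value thresholding. The following is a minimal sketch, not the authors' implementation; the function names `svt` and `proximal_gradient` and the quadratic loss used for illustration are my own choices:

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_* (trace norm)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def proximal_gradient(grad_f, X0, step, tau, n_iters=500):
    """PGM iteration: X_{k+1} = prox_{step * tau * ||.||_*}(X_k - step * grad_f(X_k))."""
    X = X0.copy()
    for _ in range(n_iters):
        X = svt(X - step * grad_f(X), step * tau)
    return X

# Illustration with a simple smooth loss f(X) = 0.5 * ||X - M||_F^2,
# whose gradient is X - M (Lipschitz constant 1, so step = 1 is valid).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 4))
X_star = proximal_gradient(lambda X: X - M, np.zeros((5, 4)), step=1.0, tau=0.5)
```

For this particular quadratic loss with step 1, the iteration reaches the minimizer `svt(M, tau)` in a single step; for a general convex smooth loss with Lipschitz gradient, the paper's contribution is showing the iterates converge linearly under conditions weaker than strong convexity.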

Cite

Text

Hou et al. "On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization." Neural Information Processing Systems, 2013.

Markdown

[Hou et al. "On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/hou2013neurips-linear/)

BibTeX

@inproceedings{hou2013neurips-linear,
  title     = {{On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization}},
  author    = {Hou, Ke and Zhou, Zirui and So, Anthony Man-Cho and Luo, Zhi-Quan},
  booktitle = {Neural Information Processing Systems},
  year      = {2013},
  pages     = {710-718},
  url       = {https://mlanthology.org/neurips/2013/hou2013neurips-linear/}
}