\ell_{1,p}-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods
Abstract
Recently, \ell_{1,p}-regularization has been widely used to induce structured sparsity in the solutions to various optimization problems. Motivated by the desire to analyze the convergence rate of first-order methods, we show that for a large class of \ell_{1,p}-regularized problems, an error bound condition is satisfied when p ∈ [1, 2] or p = ∞ but fails to hold for any p ∈ (2, ∞). Based on this result, we show that many first-order methods enjoy an asymptotic linear rate of convergence when applied to \ell_{1,p}-regularized linear or logistic regression with p ∈ [1, 2] or p = ∞. By contrast, numerical experiments suggest that for the same class of problems with p ∈ (2, ∞), the aforementioned methods may not converge linearly.
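To make the setting concrete, below is a minimal sketch of one first-order method (proximal gradient) applied to \ell_{1,2}-regularized least squares, i.e., the group-lasso case p = 2, whose prox is blockwise soft-thresholding. The function names, the group partition `groups`, and the constant step size 1/L are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def prox_group_l2(x, groups, t):
    # Blockwise soft-thresholding: proximal operator of t * sum_g ||x_g||_2.
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm <= t else (1.0 - t / norm) * x[g]
    return out

def proximal_gradient(A, b, groups, lam, num_iters=500):
    # Minimize 0.5 * ||A x - b||^2 + lam * sum_g ||x_g||_2 (group lasso, p = 2).
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the least-squares gradient
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)       # gradient of the smooth loss term
        x = prox_group_l2(x - grad / L, groups, lam / L)
    return x

# Example: 6 variables partitioned into two groups of 3 (synthetic data).
A = np.random.randn(20, 6)
b = np.random.randn(20)
groups = [np.arange(0, 3), np.arange(3, 6)]
x_hat = proximal_gradient(A, b, groups, lam=0.5)
```

The paper's error bound result for p ∈ [1, 2] or p = ∞ is what underlies the asymptotic linear convergence of iterations of this kind; for p ∈ (2, ∞) the same guarantee is not available.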
Cite
Text
Zhou et al. "\ell_{1,p}-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods." International Conference on Machine Learning, 2015.
Markdown
[Zhou et al. "\ell_{1,p}-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/zhou2015icml-pnorm/)
BibTeX
@inproceedings{zhou2015icml-pnorm,
title = {{$\ell_{1,p}$-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods}},
author = {Zhou, Zirui and Zhang, Qi and So, Anthony Man-Cho},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {1501--1510},
volume = {37},
url = {https://mlanthology.org/icml/2015/zhou2015icml-pnorm/}
}