Structured Sparsity via Alternating Direction Methods

Abstract

We consider a class of sparse learning problems in high-dimensional feature space regularized by a structured sparsity-inducing norm that incorporates prior knowledge of the group structure of the features. Such problems often pose a considerable challenge to optimization algorithms due to the non-smoothness and non-separability of the regularization term. In this paper, we focus on two commonly adopted sparsity-inducing regularization terms, the overlapping Group Lasso penalty (l1/l2-norm) and the l1/l∞-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As one of the core building blocks of this framework, we develop new algorithms using a partial-linearization/splitting technique, and we prove that the accelerated versions of these algorithms require O(1/√ε) iterations to obtain an ε-optimal solution. We compare the performance of these algorithms against that of the alternating direction augmented Lagrangian and FISTA methods on a collection of data sets, and we apply them to two real-world problems to compare the relative merits of the two norms.
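To make the two penalties concrete, here is a minimal NumPy sketch (the function names, group encoding, and toy data are illustrative, not from the paper) that evaluates the overlapping Group Lasso l1/l2 penalty and the l1/l∞ penalty over a set of possibly overlapping feature groups. The overlap between groups is what makes the proximal operator of these norms non-separable, which is the difficulty the paper's splitting framework targets.

```python
import numpy as np

def l1_l2_penalty(x, groups, weights=None):
    """Overlapping Group Lasso (l1/l2) penalty: sum_g w_g * ||x[g]||_2.

    `groups` is a list of index arrays that may overlap;
    `weights` are optional per-group weights (default 1.0).
    """
    if weights is None:
        weights = [1.0] * len(groups)
    return sum(w * np.linalg.norm(x[g], 2) for g, w in zip(groups, weights))

def l1_linf_penalty(x, groups, weights=None):
    """l1/l∞ penalty: sum_g w_g * ||x[g]||_inf (max absolute value per group)."""
    if weights is None:
        weights = [1.0] * len(groups)
    return sum(w * np.linalg.norm(x[g], np.inf) for g, w in zip(groups, weights))

# Toy example: 5 features, two overlapping groups {0,1,2} and {2,3,4};
# feature 2 belongs to both groups, so the penalties do not separate.
x = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
groups = [np.array([0, 1, 2]), np.array([2, 3, 4])]
print(l1_l2_penalty(x, groups))    # sqrt(5.25) + sqrt(9.25) ≈ 5.333
print(l1_linf_penalty(x, groups))  # 2.0 + 3.0 = 5.0
```

Because a shared feature such as index 2 above appears in more than one group, minimizing a loss plus either penalty cannot be done groupwise in closed form; the paper's augmented Lagrangian framework splits the variable across groups so that each subproblem becomes separable.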

Cite

Text

Qin and Goldfarb. "Structured Sparsity via Alternating Direction Methods." Journal of Machine Learning Research, 2012.

Markdown

[Qin and Goldfarb. "Structured Sparsity via Alternating Direction Methods." Journal of Machine Learning Research, 2012.](https://mlanthology.org/jmlr/2012/qin2012jmlr-structured/)

BibTeX

@article{qin2012jmlr-structured,
  title     = {{Structured Sparsity via Alternating Direction Methods}},
  author    = {Qin, Zhiwei and Goldfarb, Donald},
  journal   = {Journal of Machine Learning Research},
  year      = {2012},
  pages     = {1435--1468},
  volume    = {13},
  url       = {https://mlanthology.org/jmlr/2012/qin2012jmlr-structured/}
}