Super-Linear Convergence of Dual Augmented Lagrangian Algorithm for Sparsity Regularized Estimation

Abstract

We analyze the convergence behaviour of a recently proposed algorithm for regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is based on a new interpretation of DAL as a proximal minimization algorithm. We show theoretically that, under some conditions, DAL converges super-linearly in a non-asymptotic and global sense. Because of the special structure of sparse estimation problems arising in machine learning, the assumptions we make are milder and more natural than those made in conventional analyses of augmented Lagrangian algorithms. In addition, the new interpretation enables us to generalize DAL to a wide variety of sparse estimation problems. We experimentally confirm our analysis on a large-scale l1-regularized logistic regression problem and extensively compare the efficiency of the DAL algorithm to previously proposed algorithms on both synthetic and benchmark data sets.
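As a small illustration of the proximal machinery the abstract refers to (this is only the elementwise prox operator of the l1 penalty, i.e. soft-thresholding, which such sparse-estimation algorithms evaluate repeatedly; it is not the DAL algorithm itself, and the function name is our own):

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1: shrink each entry of v
    toward zero by lam, setting entries with |v_i| <= lam to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Example: entries with magnitude below the threshold become exactly zero,
# which is how l1 regularization produces sparse solutions.
v = np.array([1.2, -0.3, 0.7, -2.0])
print(soft_threshold(v, 0.5))
```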

Cite

Text

Tomioka et al. "Super-Linear Convergence of Dual Augmented Lagrangian Algorithm for Sparsity Regularized Estimation." Journal of Machine Learning Research, 2011.

Markdown

[Tomioka et al. "Super-Linear Convergence of Dual Augmented Lagrangian Algorithm for Sparsity Regularized Estimation." Journal of Machine Learning Research, 2011.](https://mlanthology.org/jmlr/2011/tomioka2011jmlr-superlinear/)

BibTeX

@article{tomioka2011jmlr-superlinear,
  title     = {{Super-Linear Convergence of Dual Augmented Lagrangian Algorithm for Sparsity Regularized Estimation}},
  author    = {Tomioka, Ryota and Suzuki, Taiji and Sugiyama, Masashi},
  journal   = {Journal of Machine Learning Research},
  year      = {2011},
  pages     = {1537--1586},
  volume    = {12},
  url       = {https://mlanthology.org/jmlr/2011/tomioka2011jmlr-superlinear/}
}