Accelerated Training for Matrix-Norm Regularization: A Boosting Approach

Abstract

Sparse learning models typically combine a smooth loss with a nonsmooth penalty, such as the trace norm. Although recent developments in sparse approximation have offered promising solution methods, current approaches either apply only to matrix-norm constrained problems or provide suboptimal convergence rates. In this paper, we propose a boosting method for regularized learning that guarantees $\epsilon$ accuracy within $O(1/\epsilon)$ iterations. Performance is further accelerated by interlacing boosting with fixed-rank local optimization---exploiting a simpler local objective than previous work. The proposed method yields state-of-the-art performance on large-scale problems. We also demonstrate an application to latent multiview learning, for which we provide the first efficient weak oracle.
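To make the boosting template concrete, the following is a minimal sketch of the classic *constrained* variant of this family of methods (Frank-Wolfe with a rank-one weak oracle for the trace norm), not the paper's penalized algorithm: each round, the weak oracle returns the best rank-one atom, namely the top singular vector pair of the negative gradient, and the iterate is updated by a convex combination. All names and the toy problem are illustrative assumptions.

```python
import numpy as np

def trace_norm_fw(grad_f, W0, tau, iters=200):
    """Frank-Wolfe boosting for min f(W) subject to ||W||_tr <= tau.

    Illustrative sketch: each round a weak oracle supplies the best
    rank-one atom, i.e. the top singular vector pair of -grad f(W).
    """
    W = W0.copy()
    for k in range(iters):
        G = grad_f(W)
        # weak oracle: top singular pair of -G (full SVD for simplicity;
        # in practice a power/Lanczos iteration suffices)
        u, _, vt = np.linalg.svd(-G)
        atom = tau * np.outer(u[:, 0], vt[0, :])  # extreme point of the trace-norm ball
        eta = 2.0 / (k + 2.0)                     # standard Frank-Wolfe step size
        W = (1.0 - eta) * W + eta * atom
    return W

# Toy problem: recover a low-rank target under a squared loss.
rng = np.random.default_rng(0)
target = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))
grad = lambda W: W - target  # gradient of 0.5 * ||W - target||_F^2
tau = np.linalg.norm(target, "nuc")
W = trace_norm_fw(grad, np.zeros((20, 20)), tau)
```

Each iterate stays inside the trace-norm ball by construction, since it is a convex combination of atoms whose trace norm is exactly `tau`; the paper's contribution is, in part, extending such rank-one boosting beyond this constrained setting to the penalized objective, with interlaced fixed-rank local optimization for speed.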

Cite

Text

Zhang et al. "Accelerated Training for Matrix-Norm Regularization: A Boosting Approach." Neural Information Processing Systems, 2012.

Markdown

[Zhang et al. "Accelerated Training for Matrix-Norm Regularization: A Boosting Approach." Neural Information Processing Systems, 2012.](https://mlanthology.org/neurips/2012/zhang2012neurips-accelerated/)

BibTeX

@inproceedings{zhang2012neurips-accelerated,
  title     = {{Accelerated Training for Matrix-Norm Regularization: A Boosting Approach}},
  author    = {Zhang, Xinhua and Schuurmans, Dale and Yu, Yao-liang},
  booktitle = {Neural Information Processing Systems},
  year      = {2012},
  pages     = {2906--2914},
  url       = {https://mlanthology.org/neurips/2012/zhang2012neurips-accelerated/}
}