HONOR: Hybrid Optimization for NOn-Convex Regularized Problems

Abstract

Recent years have witnessed the superiority of non-convex sparse learning formulations over their convex counterparts in both theory and practice. However, due to the non-convexity and non-smoothness of the regularizer, efficiently solving the resulting optimization problem for large-scale data remains quite challenging. In this paper, we propose an efficient Hybrid Optimization algorithm for NOn-convex Regularized problems (HONOR). Specifically, we develop a hybrid scheme that effectively integrates a Quasi-Newton (QN) step and a Gradient Descent (GD) step. Our contributions are as follows: (1) HONOR incorporates second-order information to greatly speed up convergence, yet it avoids solving a regularized quadratic program and involves only matrix-vector multiplications, without explicitly forming the inverse Hessian matrix. (2) We establish a rigorous convergence analysis for HONOR, showing that convergence is guaranteed even in the non-convex setting, where such analysis is typically challenging. (3) We conduct empirical studies on large-scale data sets; the results demonstrate that HONOR converges significantly faster than state-of-the-art algorithms.
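The hybrid scheme described above alternates between a quasi-Newton step, whose direction is obtained from stored curvature pairs via inner products only, and a gradient-descent fallback. The snippet below is a minimal illustrative sketch of such a scheme, not the HONOR algorithm itself: the L-BFGS-style two-loop recursion, the backtracking acceptance test, the proximal GD fallback with soft-thresholding, and the convex L1 penalty in the demo (standing in for the paper's non-convex regularizers) are all generic placeholders assumed for illustration.

# Minimal illustrative sketch of a hybrid Quasi-Newton / Gradient-Descent scheme.
# NOTE: this is NOT the HONOR algorithm; every component below (L-BFGS two-loop
# recursion, backtracking acceptance test, proximal GD fallback, L1 demo penalty)
# is a generic placeholder assumed purely for illustration.

import numpy as np


def lbfgs_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion: approximates -H^{-1} grad using only
    vector inner products, never forming the inverse Hessian explicitly."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):    # newest to oldest
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:                                               # initial Hessian scaling
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest to newest
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q


def hybrid_solve(grad_f, obj, x0, lam=0.1, max_iter=200, memory=10, tol=1e-6):
    """Try a quasi-Newton step each iteration; if it is not accepted,
    fall back to a proximal gradient-descent step."""
    x = x0.copy()
    g = grad_f(x)
    s_list, y_list = [], []
    for _ in range(max_iter):
        d = lbfgs_direction(g, s_list, y_list)
        # QN step: backtracking line search; accept only on sufficient decrease
        # of the full objective (a crude stand-in for HONOR's QN-step test).
        t, x_new, accepted = 1.0, None, False
        for _ in range(20):
            cand = x + t * d
            if obj(cand) < obj(x) - 1e-4 * t * (d @ d):
                x_new, accepted = cand, True
                break
            t *= 0.5
        if not accepted:
            # GD fallback: one proximal (ISTA-style) step, with soft-thresholding
            # standing in for handling the non-smooth regularizer.
            step = 1e-3
            z = x - step * g
            x_new = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-10:                 # curvature condition keeps the update stable
            s_list.append(s); y_list.append(y)
            if len(s_list) > memory:
                s_list.pop(0); y_list.pop(0)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x, g = x_new, g_new
    return x


if __name__ == "__main__":
    # Demo on a small lasso-style problem (convex L1 penalty used only as a
    # placeholder for the paper's non-convex regularizers).
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((50, 20)), rng.standard_normal(50), 0.1
    grad_f = lambda x: A.T @ (A @ x - b)                     # gradient of the smooth part
    obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
    x_hat = hybrid_solve(grad_f, obj, np.zeros(20), lam=lam)
    print("final objective:", obj(x_hat))

The point of the sketch is the structure the abstract highlights: curvature information enters only through inner products with stored (s, y) pairs, so no inverse Hessian is ever formed or inverted, and a GD-type step is always available whenever the QN step is not acceptable.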

Cite

Text

Gong and Ye. "HONOR: Hybrid Optimization for NOn-Convex Regularized Problems." Neural Information Processing Systems, 2015.

Markdown

[Gong and Ye. "HONOR: Hybrid Optimization for NOn-Convex Regularized Problems." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/gong2015neurips-honor/)

BibTeX

@inproceedings{gong2015neurips-honor,
  title     = {{HONOR: Hybrid Optimization for NOn-Convex Regularized Problems}},
  author    = {Gong, Pinghua and Ye, Jieping},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {415--423},
  url       = {https://mlanthology.org/neurips/2015/gong2015neurips-honor/}
}