Multi-Stage Convex Relaxation for Learning with Sparse Regularization

Abstract

We study learning formulations with non-convex regularization that are natural for sparse linear models. There are two approaches to this problem: (1) heuristic methods such as gradient descent that only find a local minimum; a drawback of this approach is the lack of a theoretical guarantee that the local minimum gives a good solution. (2) Convex relaxation such as $L_1$-regularization, which solves the problem under certain conditions but often leads to sub-optimal sparsity in practice. This paper tries to remedy the above gap between theory and practice. In particular, we investigate a multi-stage convex relaxation scheme for solving problems with non-convex regularization. Theoretically, we analyze the behavior of the resulting two-stage relaxation scheme for capped-$L_1$ regularization. Our performance bound shows that the procedure is superior to the standard $L_1$ convex relaxation for learning sparse targets. Experiments confirm the effectiveness of this method on simulated and real data.
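
To make the scheme concrete, below is a minimal sketch (not the paper's code) of multi-stage convex relaxation for capped-$L_1$ regularization with a least-squares loss. The solver, function names, and the parameters lam (regularization strength) and alpha (the cap) are illustrative assumptions: stage one is a standard Lasso, and each later stage re-solves a weighted Lasso that drops the penalty on coordinates whose current estimate already exceeds the cap, which is how the capped-$L_1$ penalty $\sum_j \min(|w_j|, \alpha)$ is relaxed stage by stage.

import numpy as np

def weighted_lasso_cd(X, y, lam_vec, n_iter=200, w0=None):
    """Coordinate descent for 0.5*||y - Xw||^2 + sum_j lam_vec[j]*|w_j|."""
    n, d = X.shape
    w = np.zeros(d) if w0 is None else w0.copy()
    col_sq = (X ** 2).sum(axis=0)      # precomputed squared column norms
    r = y - X @ w                      # current residual
    for _ in range(n_iter):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            r += X[:, j] * w[j]        # remove coordinate j from the fit
            rho = X[:, j] @ r
            # soft-thresholding update with a per-coordinate penalty
            w[j] = np.sign(rho) * max(abs(rho) - lam_vec[j], 0.0) / col_sq[j]
            r -= X[:, j] * w[j]
    return w

def multi_stage_capped_l1(X, y, lam, alpha, n_stages=5):
    """Multi-stage relaxation: stage 1 is plain L1 (Lasso); later stages
    unpenalize coordinates with |w_j| >= alpha (the capped-L1 cap)."""
    d = X.shape[1]
    w = weighted_lasso_cd(X, y, np.full(d, lam))   # stage 1: standard L1
    for _ in range(n_stages - 1):
        lam_vec = np.where(np.abs(w) < alpha, lam, 0.0)
        w = weighted_lasso_cd(X, y, lam_vec, w0=w)
    return w

With n_stages=1 this reduces to the standard $L_1$ relaxation, so the later stages can be read as a correction that removes the shrinkage bias on coordinates the current solution already deems large.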

Cite

Text

Zhang. "Multi-Stage Convex Relaxation for Learning with Sparse Regularization." Neural Information Processing Systems, 2008.

Markdown

[Zhang. "Multi-Stage Convex Relaxation for Learning with Sparse Regularization." Neural Information Processing Systems, 2008.](https://mlanthology.org/neurips/2008/zhang2008neurips-multistage/)

BibTeX

@inproceedings{zhang2008neurips-multistage,
  title     = {{Multi-Stage Convex Relaxation for Learning with Sparse Regularization}},
  author    = {Zhang, Tong},
  booktitle = {Neural Information Processing Systems},
  year      = {2008},
  pages     = {1929--1936},
  url       = {https://mlanthology.org/neurips/2008/zhang2008neurips-multistage/}
}