Parametric Simplex Method for Sparse Learning

Abstract

High-dimensional sparse learning poses a great computational challenge for large-scale data analysis. In this paper, we investigate a broad class of sparse learning approaches formulated as linear programs parametrized by a regularization factor, and solve them by the parametric simplex method (PSM). PSM offers significant advantages over competing methods: (1) PSM naturally obtains the complete solution path for all values of the regularization parameter; (2) PSM provides a high-precision dual certificate stopping criterion; (3) PSM yields sparse solutions through very few iterations, and the solution sparsity significantly reduces the computational cost per iteration. In particular, we demonstrate the superiority of PSM on various sparse learning problems, including the Dantzig selector for sparse linear regression, the sparse support vector machine for sparse linear classification, and sparse differential network estimation. We then provide sufficient conditions under which PSM always outputs sparse solutions, so that its computational performance can be significantly boosted. Thorough numerical experiments demonstrate the outstanding performance of PSM.
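
To make the linear-programming formulation concrete, the following is a minimal sketch (not the authors' PSM implementation) of the Dantzig selector written as a standard LP and solved with SciPy's generic solver for a single, fixed regularization value. The function name dantzig_selector_lp and the variables X, y, and lam are illustrative assumptions; PSM instead traces the full path over the regularization parameter rather than re-solving at each value.

# Sketch: Dantzig selector as an LP, solved with a generic solver for one lam.
# This illustrates the problem class PSM targets; it is not the PSM algorithm.
import numpy as np
from scipy.optimize import linprog

def dantzig_selector_lp(X, y, lam):
    """min ||beta||_1  s.t.  ||X^T (y - X beta)||_inf <= lam,
    using the standard split beta = beta_plus - beta_minus."""
    n, d = X.shape
    G = X.T @ X          # Gram matrix (d x d)
    b = X.T @ y          # correlation vector (d,)
    # Decision variables: [beta_plus; beta_minus], both nonnegative.
    c = np.ones(2 * d)   # l1 objective: sum(beta_plus + beta_minus)
    A_ub = np.vstack([np.hstack([-G, G]),    #  X^T(y - X beta) <= lam
                      np.hstack([ G, -G])])  # -X^T(y - X beta) <= lam
    b_ub = np.concatenate([lam - b, lam + b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.x[:d] - res.x[d:]

# Example usage on synthetic sparse-regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(50)
print(np.round(dantzig_selector_lp(X, y, lam=1.0), 2))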

Cite

Text

Pang et al. "Parametric Simplex Method for Sparse Learning." Neural Information Processing Systems, 2017.

Markdown

[Pang et al. "Parametric Simplex Method for Sparse Learning." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/pang2017neurips-parametric/)

BibTeX

@inproceedings{pang2017neurips-parametric,
  title     = {{Parametric Simplex Method for Sparse Learning}},
  author    = {Pang, Haotian and Liu, Han and Vanderbei, Robert J and Zhao, Tuo},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {188--197},
  url       = {https://mlanthology.org/neurips/2017/pang2017neurips-parametric/}
}