Sparse Recovery by Thresholded Non-Negative Least Squares
Abstract
Non-negative data are commonly encountered in numerous fields, making non-negative least squares regression (NNLS) a frequently used tool. Relative to its simplicity, it often performs rather well in practice. Serious doubts about its usefulness arise for modern high-dimensional linear models. Even in this setting, contrary to what first intuition may suggest, we show that for a broad class of designs, NNLS is resistant to overfitting and works excellently for sparse recovery when combined with thresholding, experimentally even outperforming L1-regularization. Since NNLS also circumvents the delicate choice of a regularization parameter, our findings suggest that NNLS may be the method of choice.
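The two-step estimator the abstract describes, an NNLS fit followed by hard thresholding of small coefficients, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the synthetic design, and the threshold value are assumptions chosen for the example.

```python
import numpy as np
from scipy.optimize import nnls

def thresholded_nnls(X, y, tau):
    """Two-step sparse estimator: solve min ||y - X b||_2 s.t. b >= 0,
    then hard-threshold coefficients at level tau (illustrative sketch)."""
    beta, _ = nnls(X, y)       # non-negative least squares fit
    beta[beta <= tau] = 0.0    # hard thresholding at level tau
    return beta

# Synthetic sparse, non-negative example (parameters are illustrative)
rng = np.random.default_rng(0)
n, p, s = 100, 20, 3
X = np.abs(rng.standard_normal((n, p)))   # non-negative design matrix
beta_true = np.zeros(p)
beta_true[:s] = [2.0, 1.5, 1.0]           # sparse non-negative coefficients
y = X @ beta_true + 0.01 * rng.standard_normal(n)

beta_hat = thresholded_nnls(X, y, tau=0.1)
support = np.flatnonzero(beta_hat)        # recovered support of the signal
```

Note that, consistent with the abstract's point about regularization parameters, the only tuning quantity here is the threshold `tau`, applied after an otherwise parameter-free NNLS fit.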
Cite
Text
Slawski and Hein. "Sparse Recovery by Thresholded Non-Negative Least Squares." Neural Information Processing Systems, 2011.

Markdown
[Slawski and Hein. "Sparse Recovery by Thresholded Non-Negative Least Squares." Neural Information Processing Systems, 2011.](https://mlanthology.org/neurips/2011/slawski2011neurips-sparse/)

BibTeX
@inproceedings{slawski2011neurips-sparse,
title = {{Sparse Recovery by Thresholded Non-Negative Least Squares}},
author = {Slawski, Martin and Hein, Matthias},
booktitle = {Neural Information Processing Systems},
year = {2011},
pages = {1926-1934},
url = {https://mlanthology.org/neurips/2011/slawski2011neurips-sparse/}
}