CUR from a Sparse Optimization Viewpoint
Abstract
The CUR decomposition provides an approximation of a matrix X that has low reconstruction error and that is sparse in the sense that the resulting approximation lies in the span of only a few columns of X. In this regard, it appears to be similar to many sparse PCA methods. However, CUR takes a randomized algorithmic approach whereas most sparse PCA methods are framed as convex optimization problems. In this paper, we try to understand CUR from a sparse optimization viewpoint. In particular, we show that CUR is implicitly optimizing a sparse regression objective and, furthermore, cannot be directly cast as a sparse PCA method. We observe that the sparsity attained by CUR possesses an interesting structure, which leads us to formulate a sparse PCA method that achieves a CUR-like sparsity.
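For readers unfamiliar with the decomposition the abstract refers to, the following is a minimal NumPy sketch of a leverage-score-based CUR construction. It is not the authors' exact algorithm: the sampling scheme (without replacement), the seed handling, and the pseudoinverse-based choice of the middle factor U are illustrative assumptions. It only shows the structural point the abstract makes, namely that the approximation C @ U @ R lies in the span of a few sampled columns of X.

```python
import numpy as np


def leverage_scores(X, k):
    """Normalized statistical leverage scores of the columns of X,
    computed from the top-k right singular vectors (they sum to 1)."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return np.sum(Vt[:k, :] ** 2, axis=0) / k


def cur_decomposition(X, k, c, r, seed=0):
    """Sample c columns and r rows of X with probability proportional to
    their rank-k leverage scores, then pick U so that C @ U @ R approximates X."""
    rng = np.random.default_rng(seed)
    cols = rng.choice(X.shape[1], size=c, replace=False, p=leverage_scores(X, k))
    rows = rng.choice(X.shape[0], size=r, replace=False, p=leverage_scores(X.T, k))
    C, R = X[:, cols], X[rows, :]
    U = np.linalg.pinv(C) @ X @ np.linalg.pinv(R)  # least-squares optimal middle factor
    return C, U, R


# Example: a nearly rank-5 matrix is well approximated in the span of 10 of its columns.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 30))
X += 0.01 * rng.standard_normal(X.shape)
C, U, R = cur_decomposition(X, k=5, c=10, r=10)
print("relative error:", np.linalg.norm(X - C @ U @ R) / np.linalg.norm(X))
```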
Cite
Text
Bien et al. "CUR from a Sparse Optimization Viewpoint." Neural Information Processing Systems, 2010.
Markdown
[Bien et al. "CUR from a Sparse Optimization Viewpoint." Neural Information Processing Systems, 2010.](https://mlanthology.org/neurips/2010/bien2010neurips-cur/)
BibTeX
@inproceedings{bien2010neurips-cur,
  title     = {{CUR from a Sparse Optimization Viewpoint}},
  author    = {Bien, Jacob and Xu, Ya and Mahoney, Michael W.},
  booktitle = {Neural Information Processing Systems},
  year      = {2010},
  pages     = {217--225},
  url       = {https://mlanthology.org/neurips/2010/bien2010neurips-cur/}
}