Sparse Projections onto the Simplex
Abstract
Most learning methods with rank or sparsity constraints use convex relaxations, which lead to optimization with the nuclear norm or the ℓ1-norm. However, several important learning applications cannot benefit from this approach as they feature these convex norms as constraints in addition to the non-convex rank and sparsity constraints. In this setting, we derive efficient sparse projections onto the simplex and its extension, and illustrate how to use them to solve high-dimensional learning problems in quantum tomography, sparse density estimation, and portfolio selection with non-convex constraints.
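For intuition, below is a minimal NumPy sketch of the core operation the abstract refers to: the Euclidean projection onto the intersection of the standard simplex and a k-sparsity constraint, via a greedy select-then-project strategy (keep the k largest entries, then project them onto the simplex). This is a sketch in the spirit of the paper's approach, not the authors' reference implementation; the function names are illustrative.

```python
import numpy as np

def project_simplex(v, z=1.0):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = z},
    using the standard sort-and-threshold algorithm."""
    u = np.sort(v)[::-1]                        # entries in decreasing order
    cssv = np.cumsum(u) - z
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)             # shift that enforces the sum constraint
    return np.maximum(v - theta, 0.0)

def project_sparse_simplex(v, k, z=1.0):
    """Sketch of a sparse simplex projection onto
    {x : x >= 0, sum(x) = z, ||x||_0 <= k}: restrict to the support
    of the k largest entries, then project onto the simplex there."""
    idx = np.argpartition(v, -k)[-k:]           # indices of the k largest entries
    x = np.zeros_like(v, dtype=float)
    x[idx] = project_simplex(v[idx], z)
    return x

# Example: project a random vector onto the 3-sparse unit simplex.
rng = np.random.default_rng(0)
v = rng.standard_normal(10)
x = project_sparse_simplex(v, k=3)
print(x, x.sum(), np.count_nonzero(x))          # sums to 1, at most 3 nonzeros
```

The resulting vector is nonnegative, sums to z, and has at most k nonzero entries, which is the kind of combined constraint set (ℓ1-ball/simplex plus sparsity) that the paper targets.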
Cite

Text

Kyrillidis et al. "Sparse Projections onto the Simplex." International Conference on Machine Learning, 2013.

Markdown

[Kyrillidis et al. "Sparse Projections onto the Simplex." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/kyrillidis2013icml-sparse/)

BibTeX
@inproceedings{kyrillidis2013icml-sparse,
title = {{Sparse Projections onto the Simplex}},
author = {Kyrillidis, Anastasios and Becker, Stephen and Cevher, Volkan and Koch, Christoph},
booktitle = {International Conference on Machine Learning},
year = {2013},
pages = {235--243},
volume = {28},
url = {https://mlanthology.org/icml/2013/kyrillidis2013icml-sparse/}
}