Sparse Prediction with the $k$-Support Norm

Abstract

We derive a novel norm that corresponds to the tightest convex relaxation of sparsity combined with an $\ell_2$ penalty. We show that this new norm, the $k$-support norm, provides a tighter relaxation than the elastic net and is thus a good replacement for the Lasso or the elastic net in sparse prediction problems. Moreover, through studying the new norm we bound the looseness of the elastic net, shedding new light on it and providing justification for its use.
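
For reference, the $k$-support norm of $w \in \mathbb{R}^d$ admits a closed form in terms of the sorted magnitudes $|w|_1^\downarrow \geq \dots \geq |w|_d^\downarrow$ (Proposition 2.1 of the paper): $\|w\|_k^{sp} = \big( \sum_{i=1}^{k-r-1} (|w|_i^\downarrow)^2 + \frac{1}{r+1} \big( \sum_{i=k-r}^{d} |w|_i^\downarrow \big)^2 \big)^{1/2}$, where $r \in \{0, \dots, k-1\}$ is the unique integer such that $|w|_{k-r-1}^\downarrow > \frac{1}{r+1} \sum_{i=k-r}^{d} |w|_i^\downarrow \geq |w|_{k-r}^\downarrow$ (with the convention $|w|_0^\downarrow = \infty$). The NumPy sketch below implements this formula; it is our own illustration under those assumptions, not the authors' code, and the function name `k_support_norm` is hypothetical.

```python
import numpy as np


def k_support_norm(w, k):
    """k-support norm of w, via the closed form in Proposition 2.1 of
    Argyriou, Foygel & Srebro (2012).  For k = 1 it reduces to the l1
    norm, and for k = len(w) to the l2 norm."""
    a = np.sort(np.abs(np.asarray(w, dtype=float)))[::-1]  # |w| sorted descending
    d = a.size
    if not 1 <= k <= d:
        raise ValueError("k must satisfy 1 <= k <= len(w)")
    for r in range(k):
        # T_r = (1/(r+1)) * sum_{i=k-r}^{d} |w|_i  (sums are 1-indexed)
        tail = a[k - r - 1:].sum() / (r + 1)
        # |w|_{k-r-1}, with the convention |w|_0 = +infinity
        head = np.inf if r == k - 1 else a[k - r - 2]
        if head > tail >= a[k - r - 1]:
            # squared head entries plus (1/(r+1)) * (tail sum)^2 = (r+1) * T_r^2
            return float(np.sqrt((a[:k - r - 1] ** 2).sum() + (r + 1) * tail ** 2))
    raise RuntimeError("no valid r found (should not happen for 1 <= k <= d)")


# Quick sanity checks
w = np.array([3.0, -1.0, 0.5, 0.0])
assert np.isclose(k_support_norm(w, 1), np.abs(w).sum())    # k = 1: l1 norm
assert np.isclose(k_support_norm(w, 4), np.linalg.norm(w))  # k = d: l2 norm
s = np.array([2.0, 1.0, 0.0, 0.0])                          # a 2-sparse vector
assert np.isclose(k_support_norm(s, 2), np.linalg.norm(s))  # equals its l2 norm
```

The sanity checks reflect the geometry behind the norm: its unit ball is the convex hull of vectors with at most $k$ nonzeros and unit $\ell_2$ norm, so any $k$-sparse $w$ has $k$-support norm equal to $\|w\|_2$, while $k = 1$ and $k = d$ recover the $\ell_1$ and $\ell_2$ norms respectively.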

Cite

Text

Argyriou et al. "Sparse Prediction with the $k$-Support Norm." Neural Information Processing Systems, 2012.

Markdown

[Argyriou et al. "Sparse Prediction with the $k$-Support Norm." Neural Information Processing Systems, 2012.](https://mlanthology.org/neurips/2012/argyriou2012neurips-sparse/)

BibTeX

@inproceedings{argyriou2012neurips-sparse,
  title     = {{Sparse Prediction with the $k$-Support Norm}},
  author    = {Argyriou, Andreas and Foygel, Rina and Srebro, Nathan},
  booktitle = {Neural Information Processing Systems},
  year      = {2012},
  pages     = {1457--1465},
  url       = {https://mlanthology.org/neurips/2012/argyriou2012neurips-sparse/}
}