Aggregation and Sparsity via L1 Penalized Least Squares

Abstract

This paper shows that near-optimal rates of aggregation and adaptation to unknown sparsity can be simultaneously achieved via ℓ_1 penalized least squares in a nonparametric regression setting. The main tool is a novel oracle inequality for the sum of the empirical squared loss of the penalized least squares estimate and a term reflecting the sparsity of the unknown regression function.
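The estimator studied in the paper is ℓ_1 penalized least squares (the Lasso form of penalization). As a rough illustration of the underlying optimization, the sketch below minimizes an empirical squared loss plus an ℓ_1 penalty by cyclic coordinate descent with soft-thresholding. This is a minimal sketch only: the objective scaling, the penalty convention, the fixed iteration count, and the function name are illustrative assumptions, not the paper's exact setup or tuning of the penalty.

```python
import numpy as np

def l1_penalized_ls(X, y, lam, n_iters=200):
    """Minimize (1/n)||y - X b||^2 + 2*lam*||b||_1 by cyclic coordinate descent.

    Illustrative sketch: the factor-of-2 penalty convention and the fixed
    iteration count are assumptions, not taken from the paper.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature (1/n)||X_j||^2
    resid = y - X @ beta               # current residual y - X b
    for _ in range(n_iters):
        for j in range(p):
            # correlation of coordinate j with the partial residual
            # (residual with coordinate j's contribution added back)
            rho = (X[:, j] @ resid) / n + col_sq[j] * beta[j]
            # soft-thresholding: the closed-form coordinate-wise minimizer
            new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            resid += X[:, j] * (beta[j] - new)  # keep residual in sync
            beta[j] = new
    return beta
```

With a sparse true coefficient vector, the penalty drives most estimated coordinates exactly to zero, which is the sparsity-adaptation behavior the oracle inequality quantifies.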

Cite

Text

Bunea et al. "Aggregation and Sparsity via L1 Penalized Least Squares." Annual Conference on Computational Learning Theory, 2006. doi:10.1007/11776420_29

Markdown

[Bunea et al. "Aggregation and Sparsity via L1 Penalized Least Squares." Annual Conference on Computational Learning Theory, 2006.](https://mlanthology.org/colt/2006/bunea2006colt-aggregation/) doi:10.1007/11776420_29

BibTeX

@inproceedings{bunea2006colt-aggregation,
  title     = {{Aggregation and Sparsity via L1 Penalized Least Squares}},
  author    = {Bunea, Florentina and Tsybakov, Alexandre B. and Wegkamp, Marten H.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2006},
  pages     = {379--391},
  doi       = {10.1007/11776420_29},
  url       = {https://mlanthology.org/colt/2006/bunea2006colt-aggregation/}
}