Aggregation of Supports Along the Lasso Path

Abstract

In linear regression with fixed design, we propose two procedures that aggregate a data-driven collection of supports. The collection is a subset of the $2^p$ possible supports, and both its cardinality and its elements may depend on the data. The procedures satisfy oracle inequalities with no assumption on the design matrix. We then use these procedures to aggregate the supports that appear on the regularization path of the Lasso, constructing an estimator that mimics the best Lasso estimator. If the restricted eigenvalue condition on the design matrix is satisfied, this estimator achieves optimal prediction bounds. Finally, we discuss the computational cost of these procedures.
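As a rough illustration of the idea in the abstract, the sketch below is a hypothetical implementation, not the paper's exact procedure: it collects the distinct supports appearing along a Lasso path, refits least squares on each support, and combines the refitted predictions with exponential weights penalized by support size. The function name aggregate_lasso_supports, the noise level sigma2, and the -|S| penalty are illustrative assumptions; the paper's actual aggregation step (and its tuning) differs.

import numpy as np
from sklearn.linear_model import lasso_path

def aggregate_lasso_supports(X, y, sigma2=1.0):
    """Hypothetical sketch: exponentially weighted aggregation of
    least-squares refits over the supports of a Lasso path."""
    # Distinct supports appearing along the Lasso regularization path.
    _, coefs, _ = lasso_path(X, y)  # coefs has shape (n_features, n_alphas)
    supports = {tuple(np.flatnonzero(coefs[:, j]))
                for j in range(coefs.shape[1])}
    supports.discard(())  # skip the empty support
    if not supports:
        return np.zeros_like(y)

    fits, log_w = [], []
    for S in map(list, supports):
        # Least-squares refit restricted to support S.
        beta, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
        mu = X[:, S] @ beta
        rss = np.sum((y - mu) ** 2)
        # Illustrative weight: data fit penalized by support size,
        # mimicking a prior that decreases with |S| (an assumption here).
        log_w.append(-rss / (2.0 * sigma2) - len(S))
        fits.append(mu)

    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    w /= w.sum()
    # Aggregated prediction: convex combination of the refitted fits.
    return sum(wi * mu for wi, mu in zip(w, fits))

In this toy version the number of candidate supports is at most the number of grid points on the path, so the refits and the weighting are cheap relative to computing the path itself; the paper's discussion of computational cost is the authoritative account.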

Cite

Text

Bellec. "Aggregation of Supports Along the Lasso Path." Annual Conference on Computational Learning Theory, 2016.

Markdown

[Bellec. "Aggregation of Supports Along the Lasso Path." Annual Conference on Computational Learning Theory, 2016.](https://mlanthology.org/colt/2016/bellec2016colt-aggregation/)

BibTeX

@inproceedings{bellec2016colt-aggregation,
  title     = {{Aggregation of Supports Along the Lasso Path}},
  author    = {Bellec, Pierre C.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2016},
  pages     = {488--529},
  url       = {https://mlanthology.org/colt/2016/bellec2016colt-aggregation/}
}