On the Consistency of Feature Selection with Lasso for Non-Linear Targets

Abstract

An important question in feature selection is whether a selection strategy recovers the “true” set of features, given enough data. We study this question in the context of the popular Least Absolute Shrinkage and Selection Operator (Lasso) feature selection strategy. In particular, we consider the scenario when the model is misspecified so that the learned model is linear while the underlying real target is nonlinear. Surprisingly, we prove that under certain conditions, Lasso is still able to recover the correct features in this case. We also carry out numerical studies to empirically verify the theoretical results and explore the necessity of the conditions under which the proof holds.
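The setting the abstract describes can be illustrated with a small sketch (not from the paper): we fit a linear Lasso model by cyclic coordinate descent to a synthetic target that is a *nonlinear* function of two of five features, and check which features receive nonzero weight. The data-generating function, the regularization level `lam`, and all names here are illustrative assumptions, not the authors' experimental setup.

```python
import math
import random

def soft_threshold(rho, lam):
    """Soft-thresholding operator used in the Lasso coordinate-descent update."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 by cyclic coordinate descent."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(n_iter):
        for j in range(d):
            # Correlation of feature j with the partial residual (j held out).
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(d) if k != j))
                      for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / z
    return w

random.seed(0)
n, d = 1000, 5
X = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
# Misspecified setting: the target is nonlinear in features 0 and 2 only;
# features 1, 3, 4 are irrelevant noise (an illustrative choice, not the paper's).
y = [math.tanh(3.0 * X[i][0]) + 0.5 * X[i][2] ** 3 + 0.1 * random.gauss(0.0, 1.0)
     for i in range(n)]

w = lasso_cd(X, y, lam=0.2)
selected = [j for j in range(d) if abs(w[j]) > 1e-6]
print(selected)  # the truly relevant features 0 and 2 should be recovered
```

Even though the fitted model is linear while the target is not, the Lasso support matches the true feature set here, which is the kind of recovery the paper analyzes theoretically.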

Cite

Text

Zhang et al. "On the Consistency of Feature Selection with Lasso for Non-Linear Targets." International Conference on Machine Learning, 2016.

Markdown

[Zhang et al. "On the Consistency of Feature Selection with Lasso for Non-Linear Targets." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/zhang2016icml-consistency/)

BibTeX

@inproceedings{zhang2016icml-consistency,
  title     = {{On the Consistency of Feature Selection with Lasso for Non-Linear Targets}},
  author    = {Zhang, Yue and Guo, Weihong and Ray, Soumya},
  booktitle = {International Conference on Machine Learning},
  year      = {2016},
  pages     = {183--191},
  volume    = {48},
  url       = {https://mlanthology.org/icml/2016/zhang2016icml-consistency/}
}