Gradient LASSO for Feature Selection

Abstract

LASSO (Least Absolute Shrinkage and Selection Operator) is a useful tool for achieving shrinkage and variable selection simultaneously. Since LASSO uses the $L_1$ penalty, the optimization has typically relied on quadratic programming (QP) or general non-linear programming, which is known to be computationally intensive. In this paper, we propose a gradient descent algorithm for LASSO. Even though the final result is slightly less accurate, the proposed algorithm is computationally simpler than QP or non-linear programming, and so can be applied to large-scale problems. We provide the convergence rate of the algorithm, and illustrate it with simulated models as well as real data sets.
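The paper's own coordinate-wise gradient algorithm differs in its details, but a minimal proximal-gradient (ISTA) sketch illustrates the kind of QP-free optimization the abstract describes: a plain gradient step on the squared-error term followed by soft-thresholding to handle the $L_1$ penalty. The function names, step-size choice, and iteration count below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_proximal_gradient(X, y, lam, n_iter=500):
    # Minimize (1/2n)||y - X b||^2 + lam * ||b||_1 via proximal gradient
    # (ISTA). This avoids QP entirely: each iteration costs two matrix-vector
    # products plus an elementwise threshold.
    n, p = X.shape
    # Constant step size 1/L, where L is the Lipschitz constant of the
    # gradient of the smooth squared-error term (largest singular value of X,
    # squared, divided by n).
    L = np.linalg.norm(X, 2) ** 2 / n
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n     # gradient of the smooth part
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

# Usage: recover a sparse coefficient vector from noiseless data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true
beta_hat = lasso_proximal_gradient(X, y, lam=0.1)
```

The soft-thresholding step both shrinks coefficients toward zero and sets small ones exactly to zero, which is how this family of gradient methods performs shrinkage and variable selection simultaneously without a QP solver.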

Cite

Text

Kim and Kim. "Gradient LASSO for Feature Selection." International Conference on Machine Learning, 2004. doi:10.1145/1015330.1015364

Markdown

[Kim and Kim. "Gradient LASSO for Feature Selection." International Conference on Machine Learning, 2004.](https://mlanthology.org/icml/2004/kim2004icml-gradient/) doi:10.1145/1015330.1015364

BibTeX

@inproceedings{kim2004icml-gradient,
  title     = {{Gradient LASSO for Feature Selection}},
  author    = {Kim, Yongdai and Kim, Jinseog},
  booktitle = {International Conference on Machine Learning},
  year      = {2004},
  doi       = {10.1145/1015330.1015364},
  url       = {https://mlanthology.org/icml/2004/kim2004icml-gradient/}
}