Stagewise Lasso
Abstract
Many statistical machine learning algorithms minimize either an empirical loss function, as in AdaBoost, or a penalized empirical loss, as in Lasso or SVM. A single regularization tuning parameter controls the trade-off between fidelity to the data and generalizability, or equivalently between bias and variance. As this tuning parameter changes, a regularization "path" of solutions to the minimization problem is generated, and the whole path is needed to select a tuning parameter that optimizes prediction or interpretation performance. Algorithms such as homotopy-Lasso or LARS-Lasso and Forward Stagewise Fitting (FSF) (aka ε-Boosting) are of great interest because the sparse models they produce are useful for interpretation in addition to prediction.
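To make the path idea concrete, below is a minimal numpy sketch of generic Forward Stagewise Fitting (FSF / ε-Boosting) as characterized in the abstract: each step nudges the coefficient most correlated with the current residual by a small step ε, tracing out a coefficient path similar to the Lasso path. The function name, step sizes, and synthetic data are illustrative assumptions, and this is a sketch of plain FSF, not the algorithm proposed in the paper itself.

```python
import numpy as np

def forward_stagewise(X, y, epsilon=0.01, n_steps=500):
    """Sketch of Forward Stagewise Fitting (FSF / epsilon-Boosting).

    At each step, move the coefficient of the predictor most
    correlated with the current residual by a small step epsilon,
    accumulating a path of sparse coefficient vectors.
    """
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.copy()
    path = [beta.copy()]
    for _ in range(n_steps):
        corr = X.T @ residual            # correlation of each predictor with residual
        j = np.argmax(np.abs(corr))      # most correlated predictor
        step = epsilon * np.sign(corr[j])
        beta[j] += step                  # tiny forward step on that coefficient
        residual -= step * X[:, j]       # update residual incrementally
        path.append(beta.copy())
    return np.array(path)                # one row per step: the regularization "path"

# Illustrative usage on synthetic data (hypothetical setup)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([2.0, 0.0, -1.5, 0.0, 0.0]) + 0.1 * rng.standard_normal(100)
path = forward_stagewise(X, y)
print(path[-1])  # final coefficients approximate the sparse truth
```

Each row of the returned array is a model along the path; sweeping over the rows plays the same role as sweeping the regularization tuning parameter when selecting a model for prediction or interpretation.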
Cite
Text
Zhao and Yu. "Stagewise Lasso." Journal of Machine Learning Research, 2007.
Markdown
[Zhao and Yu. "Stagewise Lasso." Journal of Machine Learning Research, 2007.](https://mlanthology.org/jmlr/2007/zhao2007jmlr-stagewise/)
BibTeX
@article{zhao2007jmlr-stagewise,
title = {{Stagewise Lasso}},
author = {Zhao, Peng and Yu, Bin},
journal = {Journal of Machine Learning Research},
year = {2007},
pages = {2701--2726},
volume = {8},
url = {https://mlanthology.org/jmlr/2007/zhao2007jmlr-stagewise/}
}