A General Framework for Fast Stagewise Algorithms
Abstract
Forward stagewise regression follows a very simple strategy for constructing a sequence of sparse regression estimates: it starts with all coefficients equal to zero, and iteratively updates the coefficient (by a small amount $\epsilon$) of the variable that achieves the maximal absolute inner product with the current residual. This procedure has an interesting connection to the lasso: under some conditions, it is known that the sequence of forward stagewise estimates exactly coincides with the lasso path, as the step size $\epsilon$ goes to zero. Furthermore, essentially the same equivalence holds outside of least squares regression, with the minimization of a differentiable convex loss function subject to an $\ell_1$ norm constraint (the stagewise algorithm now updates the coefficient corresponding to the maximal absolute component of the gradient).
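To make the iteration concrete, below is a minimal sketch of the forward stagewise procedure described in the abstract, written in Python with NumPy. The function name `forward_stagewise`, the step size `eps`, and the fixed step count are illustrative choices, not details taken from the paper.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=1000):
    """Sketch of forward stagewise regression.

    X: (n, p) predictor matrix; y: (n,) response.
    Returns the (n_steps + 1, p) path of coefficient iterates.
    """
    n, p = X.shape
    beta = np.zeros(p)                 # start with all coefficients at zero
    path = [beta.copy()]
    for _ in range(n_steps):
        residual = y - X @ beta
        corr = X.T @ residual          # inner products with the current residual
        j = np.argmax(np.abs(corr))    # variable with maximal absolute inner product
        beta[j] += eps * np.sign(corr[j])  # update that coefficient by a small amount
        path.append(beta.copy())
    return np.array(path)
```

For the generalization mentioned above, with a differentiable convex loss in place of least squares, the same update would select the coefficient corresponding to the maximal absolute component of the gradient, i.e. `X.T @ residual` is replaced by the negative gradient of the loss at the current iterate.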
Cite
Text
Tibshirani. "A General Framework for Fast Stagewise Algorithms." Journal of Machine Learning Research, 2015.Markdown
[Tibshirani. "A General Framework for Fast Stagewise Algorithms." Journal of Machine Learning Research, 2015.](https://mlanthology.org/jmlr/2015/tibshirani2015jmlr-general/)BibTeX
@article{tibshirani2015jmlr-general,
title = {{A General Framework for Fast Stagewise Algorithms}},
author = {Tibshirani, Ryan J.},
journal = {Journal of Machine Learning Research},
year = {2015},
pages = {2543--2588},
volume = {16},
url = {https://mlanthology.org/jmlr/2015/tibshirani2015jmlr-general/}
}