Sparse Boosting
Abstract
We propose Sparse Boosting (the SparseL2Boost algorithm), a variant of boosting with the squared error loss. SparseL2Boost yields sparser solutions than the previously proposed L2Boosting by minimizing penalized L2-loss functions (FPE-type model selection criteria) through small-step gradient descent. Although boosting may already give relatively sparse solutions, corresponding for example to the soft-thresholding estimator in orthogonal linear models, more sparseness is sometimes desirable to increase prediction accuracy and to improve variable selection: such goals can be achieved with SparseL2Boost.
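As a rough illustration of the idea described in the abstract, the sketch below implements componentwise linear L2-type boosting in which each small-step update selects the base learner that minimizes a penalized residual sum of squares rather than the raw fit to the residuals, and the returned model is the one at the criterion-minimizing iteration. This is a minimal sketch, not the paper's implementation: the AIC-style penalty stands in for the FPE-type criteria the paper uses, degrees of freedom are measured by the trace of the boosting hat operator, and all names (sparse_l2_boost, nu, n_steps) are illustrative assumptions.

# Minimal sketch, assuming componentwise linear base learners and an
# AIC-style surrogate for the FPE-type penalty; names are illustrative.
import numpy as np

def sparse_l2_boost(X, y, n_steps=100, nu=0.1):
    """Illustrative SparseL2Boost-style fit; returns linear coefficients."""
    n, p = X.shape
    fit = np.zeros(n)                  # current fitted values
    coef = np.zeros(p)                 # accumulated coefficients
    R = np.eye(n)                      # residual operator: residuals = R @ y
    col_norm2 = (X ** 2).sum(axis=0)   # ||x_j||^2 (assumes no zero columns)
    best_coef, best_crit = coef.copy(), np.inf

    for _ in range(n_steps):
        resid = y - fit
        step_best = None
        for j in range(p):
            # Least-squares fit of predictor j to the current residuals.
            beta_j = X[:, j] @ resid / col_norm2[j]
            H_j = np.outer(X[:, j], X[:, j]) / col_norm2[j]
            # Residual operator after a shrunken update with learner j.
            R_j = (np.eye(n) - nu * H_j) @ R
            df = n - np.trace(R_j)     # trace of the boosting operator
            rss = np.sum((resid - nu * beta_j * X[:, j]) ** 2)
            crit = n * np.log(rss / n) + 2.0 * df  # penalized L2 criterion
            if step_best is None or crit < step_best[0]:
                step_best = (crit, j, beta_j, R_j)
        crit, j, beta_j, R = step_best
        coef[j] += nu * beta_j
        fit += nu * beta_j * X[:, j]
        if crit < best_crit:           # keep the criterion-minimizing model
            best_crit, best_coef = crit, coef.copy()
    return best_coef

A call like sparse_l2_boost(X, y) on a standardized design matrix returns the coefficients at the criterion-minimizing boosting iteration; under this kind of penalized selection many entries typically remain exactly zero, which is the extra sparseness the abstract refers to.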
Cite
Text
Bühlmann and Yu. "Sparse Boosting." Journal of Machine Learning Research, 2006.
Markdown
[Bühlmann and Yu. "Sparse Boosting." Journal of Machine Learning Research, 2006.](https://mlanthology.org/jmlr/2006/buhlmann2006jmlr-sparse/)
BibTeX
@article{buhlmann2006jmlr-sparse,
  title   = {{Sparse Boosting}},
  author  = {Bühlmann, Peter and Yu, Bin},
  journal = {Journal of Machine Learning Research},
  year    = {2006},
  pages   = {1001--1024},
  volume  = {7},
  url     = {https://mlanthology.org/jmlr/2006/buhlmann2006jmlr-sparse/}
}