Boosting with Structural Sparsity

Abstract

We derive generalizations of AdaBoost and related gradient-based coordinate descent methods that incorporate sparsity-promoting penalties for the norm of the predictor that is being learned. The end result is a family of coordinate descent algorithms that integrate forward feature induction and back-pruning through regularization and give an automatic stopping criterion for feature induction. We study penalties based on the $\ell_1$, $\ell_2$, and $\ell_\infty$ norms of the predictor and introduce mixed-norm penalties that build upon the initial penalties. The mixed-norm regularizers facilitate structural sparsity in parameter space, which is a useful property in multiclass prediction and other related tasks. We report empirical results that demonstrate the power of our approach in building accurate and structurally sparse models.
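To make the abstract's idea concrete, the ℓ1 case can be sketched as greedy proximal coordinate descent on the exponential loss: a coordinate may leave zero only when its gradient magnitude exceeds the regularization strength (the automatic stopping criterion for feature induction), and the soft-threshold step can push an active weight back to zero (back-pruning). This is an illustrative simplification under those assumptions, not the paper's exact algorithm; the names `l1_boost` and `soft_threshold` are ours.

```python
import numpy as np

def soft_threshold(z, tau):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(z) * max(abs(z) - tau, 0.0)

def l1_boost(X, y, lam=0.1, step=0.5, n_rounds=2000, tol=1e-3):
    """Greedy proximal coordinate descent on the average exponential
    loss with an l1 penalty lam * ||w||_1 (illustrative sketch).

    X : (n, d) array of weak-learner outputs, y : (n,) labels in {-1, +1}.
    Returns a sparse weight vector w.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_rounds):
        losses = np.exp(-y * (X @ w))              # per-example exp-loss
        grad = -(X * (y * losses)[:, None]).sum(axis=0) / n
        # Subgradient optimality violation per coordinate: a zero weight
        # stays at zero unless |grad_j| > lam (the stopping criterion for
        # feature induction); an active weight must satisfy
        # grad_j + lam * sign(w_j) ~ 0.
        viol = np.where(w == 0.0,
                        np.maximum(np.abs(grad) - lam, 0.0),
                        np.abs(grad + lam * np.sign(w)))
        j = int(np.argmax(viol))                   # greedy feature choice
        if viol[j] <= tol:
            break                                  # no feature worth moving
        # Proximal update: gradient step then soft-threshold, which can
        # also drive an active weight back to zero (back-pruning).
        w[j] = soft_threshold(w[j] - step * grad[j], step * lam)
    return w
```

On toy data where only one feature is informative, the penalty keeps the uninformative weight at exactly zero while the informative weight converges to the ℓ1-stationary point.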

Cite

Text

Duchi and Singer. "Boosting with Structural Sparsity." International Conference on Machine Learning, 2009. doi:10.1145/1553374.1553412

Markdown

[Duchi and Singer. "Boosting with Structural Sparsity." International Conference on Machine Learning, 2009.](https://mlanthology.org/icml/2009/duchi2009icml-boosting/) doi:10.1145/1553374.1553412

BibTeX

@inproceedings{duchi2009icml-boosting,
  title     = {{Boosting with Structural Sparsity}},
  author    = {Duchi, John C. and Singer, Yoram},
  booktitle = {International Conference on Machine Learning},
  year      = {2009},
  pages     = {297--304},
  doi       = {10.1145/1553374.1553412},
  url       = {https://mlanthology.org/icml/2009/duchi2009icml-boosting/}
}