Boosting as a Kernel-Based Method

Abstract

Boosting combines weak (biased) learners to obtain effective learning algorithms for classification and prediction. In this paper, we show a connection between boosting and kernel-based methods, highlighting both theoretical and practical applications. In the $\ell_2$ context, we show that boosting with a weak learner defined by a kernel K is equivalent to estimation with a special *boosting kernel*. The number of boosting iterations can then be modeled as a continuous hyperparameter, and fit (along with other parameters) using standard techniques. We then generalize the boosting kernel to a broad new class of boosting approaches for general weak learners, including those based on the $\ell_1$, hinge, and Vapnik losses. We develop fast hyperparameter tuning for this class, which has a wide range of applications including robust regression and classification. We illustrate several applications using synthetic and real data.
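The $\ell_2$ equivalence can be checked numerically in the standard L2Boost setup: with a kernel ridge weak learner (hat matrix $S = K(K+\gamma I)^{-1}$), $\nu$ boosting iterations give the fit $(I - (I-S)^\nu)\,y$, which coincides with a single kernel ridge estimate under a transformed kernel. The sketch below is illustrative only; the RBF kernel, bandwidth, and $\gamma$ are arbitrary choices, and the closed form for the boosting kernel `K_b` is derived from the hat-matrix identity rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a Gaussian (RBF) Gram matrix K.
n = 30
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.2)

gamma = 1.0                                      # weak learner regularization
S = K @ np.linalg.inv(K + gamma * np.eye(n))     # weak learner's hat matrix

# L2 boosting: repeatedly fit the weak learner to the current residuals.
nu = 5
f = np.zeros(n)
for _ in range(nu):
    f = f + S @ (y - f)

# Closed form for the boosted fit: f_nu = (I - (I - S)^nu) y.
H = np.eye(n) - np.linalg.matrix_power(np.eye(n) - S, nu)
f_direct = H @ y

# The same fit as ONE kernel ridge estimate with a "boosting kernel" K_b,
# obtained by solving K_b (K_b + gamma I)^{-1} = H for K_b.
K_b = gamma * H @ np.linalg.inv(np.eye(n) - H)
f_kernel = K_b @ np.linalg.inv(K_b + gamma * np.eye(n)) @ y

print(np.allclose(f, f_direct), np.allclose(f, f_kernel))  # → True True
```

Because $S$, $H$, and $K_b$ are all functions of the same symmetric $K$, they share eigenvectors; in the eigenbasis the iteration count $\nu$ enters only through the scalar map $\lambda \mapsto 1-(1-\lambda)^\nu$, which is what makes treating $\nu$ as a continuous hyperparameter natural.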

Cite

Text

Aravkin et al. "Boosting as a Kernel-Based Method." Machine Learning, 2019. doi:10.1007/s10994-019-05797-z

Markdown

[Aravkin et al. "Boosting as a Kernel-Based Method." Machine Learning, 2019.](https://mlanthology.org/mlj/2019/aravkin2019mlj-boosting/) doi:10.1007/s10994-019-05797-z

BibTeX

@article{aravkin2019mlj-boosting,
  title     = {{Boosting as a Kernel-Based Method}},
  author    = {Aravkin, Aleksandr Y. and Bottegal, Giulio and Pillonetto, Gianluigi},
  journal   = {Machine Learning},
  year      = {2019},
  pages     = {1951--1974},
  doi       = {10.1007/s10994-019-05797-z},
  volume    = {108},
  url       = {https://mlanthology.org/mlj/2019/aravkin2019mlj-boosting/}
}