Gradient Boosting for Kernelized Output Spaces

Abstract

A general framework is proposed for gradient boosting in supervised learning problems where the loss function is defined using a kernel over the output space. It extends boosting in a principled way to complex output spaces (images, text, graphs, etc.) and can be applied to a general class of base learners working in kernelized output spaces. Empirical results are provided on three problems: a regression problem, an image completion task, and a graph prediction problem. In these experiments, the framework is combined with tree-based base learners, which have interesting algorithmic properties. The results show that gradient boosting significantly improves these base learners and yields results competitive with other tree-based ensemble methods based on randomization.
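To make the idea concrete, here is a minimal, illustrative sketch (not the authors' exact algorithm, which uses output-kernelized regression trees) of gradient boosting with a squared loss in a kernel-induced output space. Since each prediction lies in the span of the training-output feature vectors φ(y_j), both predictions and residuals can be represented as coefficient vectors, and the base learner only ever touches the output Gram matrix K. The stump base learner, the linear output kernel, and all function names below are assumptions made for the sketch.

```python
import numpy as np

def fit_stump(X, R, K):
    """Toy base learner: a one-split regression stump over coefficient
    vectors R, scored in the kernel-induced metric err(D) = trace(D K D^T)."""
    def kerr(D):
        return np.einsum('ij,jk,ik->', D, K, D)
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            ml, mr = R[left].mean(0), R[~left].mean(0)
            err = kerr(R[left] - ml) + kerr(R[~left] - mr)
            if err < best_err:
                best_err, best = err, (j, t, ml, mr)
    return best

def stump_predict(stump, X):
    j, t, ml, mr = stump
    return np.where((X[:, j] <= t)[:, None], ml, mr)

def boost(X, K, n_rounds=20, lr=0.5):
    """Gradient boosting in the output feature space: for squared loss, the
    functional gradient at x_i is phi(y_i) - F(x_i), expressed here as a
    coefficient vector over the training outputs (row of np.eye(n) - A)."""
    n = X.shape[0]
    A = np.zeros((n, n))  # row i: coefficients of the current ensemble F(x_i)
    stumps = []
    for _ in range(n_rounds):
        R = np.eye(n) - A              # residuals in coefficient form
        stumps.append(fit_stump(X, R, K))
        A += lr * stump_predict(stumps[-1], X)
    return stumps

def predict_coeffs(stumps, X, lr=0.5):
    """Coefficients of the ensemble prediction over the training outputs."""
    P = np.zeros((X.shape[0], stumps[0][2].shape[0]))
    for s in stumps:
        P += lr * stump_predict(s, X)
    return P
```

To return an actual output, a pre-image step is still needed; the simplest choice is to pick the training output y_j maximizing 2·(P K)[i, j] − K[j, j], i.e., the nearest φ(y_j) to the predicted point in the feature space.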

Cite

Text

Geurts et al. "Gradient Boosting for Kernelized Output Spaces." International Conference on Machine Learning, 2007. doi:10.1145/1273496.1273533

Markdown

[Geurts et al. "Gradient Boosting for Kernelized Output Spaces." International Conference on Machine Learning, 2007.](https://mlanthology.org/icml/2007/geurts2007icml-gradient/) doi:10.1145/1273496.1273533

BibTeX

@inproceedings{geurts2007icml-gradient,
  title     = {{Gradient Boosting for Kernelized Output Spaces}},
  author    = {Geurts, Pierre and Wehenkel, Louis and d'Alché-Buc, Florence},
  booktitle = {International Conference on Machine Learning},
  year      = {2007},
  pages     = {289--296},
  doi       = {10.1145/1273496.1273533},
  url       = {https://mlanthology.org/icml/2007/geurts2007icml-gradient/}
}