Gradient Boosting Performs Gaussian Process Inference

Abstract

This paper shows that gradient boosting based on symmetric decision trees can be equivalently reformulated as a kernel method that converges to the solution of a certain Kernel Ridge Regression problem. We thus obtain convergence to the posterior mean of a Gaussian process, which in turn allows us to easily transform gradient boosting into a sampler from the posterior, estimating knowledge uncertainty via Monte Carlo estimation of the posterior variance. We show that the proposed sampler yields better knowledge uncertainty estimates, leading to improved out-of-domain detection.
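The sampler-based uncertainty estimate has a simple Monte Carlo reading: draw several independent samples from the (approximate) posterior and use the per-point variance of their predictions as knowledge uncertainty. Below is a minimal sketch of that idea in Python; independently seeded scikit-learn boosting models stand in for the paper's modified gradient-boosting sampler, so the model choice and all parameters here are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch: Monte Carlo estimate of predictive variance from an
# ensemble of boosted models. NOTE: independently seeded stochastic GB
# models are a stand-in for the paper's posterior sampler; this only
# illustrates the variance-as-uncertainty idea.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X_train, y_train = make_regression(n_samples=500, n_features=8,
                                   noise=0.1, random_state=0)
X_test, _ = make_regression(n_samples=100, n_features=8,
                            noise=0.1, random_state=1)

# One boosted model per seed plays the role of one posterior sample;
# subsampling injects the stochasticity that makes the seeds differ.
preds = np.stack([
    GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                              subsample=0.5, random_state=seed)
    .fit(X_train, y_train)
    .predict(X_test)
    for seed in range(10)
])

posterior_mean = preds.mean(axis=0)        # point prediction
knowledge_uncertainty = preds.var(axis=0)  # Monte Carlo posterior variance

print(knowledge_uncertainty[:5])
```

On inputs far from the training data the ensemble members tend to disagree, so the variance rises; thresholding it gives a simple out-of-domain detector in the spirit of the abstract's claim.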

Cite

Text

Ustimenko et al. "Gradient Boosting Performs Gaussian Process Inference." International Conference on Learning Representations, 2023.

Markdown

[Ustimenko et al. "Gradient Boosting Performs Gaussian Process Inference." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/ustimenko2023iclr-gradient/)

BibTeX

@inproceedings{ustimenko2023iclr-gradient,
  title     = {{Gradient Boosting Performs Gaussian Process Inference}},
  author    = {Ustimenko, Aleksei and Beliakov, Artem and Prokhorenkova, Liudmila},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/ustimenko2023iclr-gradient/}
}