A Convergence Rate Analysis for LogitBoost, MART and Their Variant

Abstract

LogitBoost, MART, and their variant can be viewed as additive tree regression using the logistic loss and boosting-style optimization. We analyze their convergence rates based on a new weak learnability formulation. We show that the rate is O(1/T) when using gradient descent only, while a linear rate is achieved when using Newton descent. Moreover, introducing Newton descent when growing the trees, as LogitBoost does, leads to a faster linear rate. Empirical results on UCI datasets support our analysis.
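The abstract distinguishes three update styles for additive tree models under the logistic loss: a pure gradient step, a gradient-fitted tree with Newton leaf values (as in MART), and a tree grown directly on the Newton step (as in LogitBoost). The following is a minimal, schematic sketch of that distinction on one feature with depth-1 stumps; it is not the authors' code, and names such as `best_split` and `boost` are illustrative only.

```python
import numpy as np

def best_split(x, target, weight):
    """Return the threshold minimizing weighted squared error of a one-split fit."""
    xs = np.sort(x)
    best_err, best_thr = np.inf, xs[0]
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue
        thr = 0.5 * (xs[i - 1] + xs[i])
        err = 0.0
        for mask in (x <= thr, x > thr):
            w, t = weight[mask], target[mask]
            mu = np.sum(w * t) / (np.sum(w) + 1e-12)
            err += np.sum(w * (t - mu) ** 2)
        if err < best_err:
            best_err, best_thr = err, thr
    return best_thr

def boost(x, y, rounds=50, lr=0.3, mode="logitboost"):
    """Labels y in {-1,+1}; F accumulates stump outputs on the single feature x."""
    F = np.zeros_like(x)
    for _ in range(rounds):
        p = 1.0 / (1.0 + np.exp(-y * F))   # sigma(y F)
        g = -y * (1.0 - p)                 # gradient of log(1 + exp(-y F)) w.r.t. F
        h = p * (1.0 - p)                  # diagonal Hessian
        if mode == "gradient":
            # gradient descent only: split and leaf values both from -g
            thr = best_split(x, -g, np.ones_like(x))
            leaf = lambda m: np.mean(-g[m])
        elif mode == "mart":
            # split from -g, but Newton (second-order) leaf values
            thr = best_split(x, -g, np.ones_like(x))
            leaf = lambda m: -np.sum(g[m]) / (np.sum(h[m]) + 1e-12)
        else:  # "logitboost"
            # Newton step used while growing the tree: Hessian-weighted fit to z = -g/h
            thr = best_split(x, -g / (h + 1e-12), h)
            leaf = lambda m: -np.sum(g[m]) / (np.sum(h[m]) + 1e-12)
        left = x <= thr
        F = F + lr * np.where(left, leaf(left), leaf(~left))
    return F

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.sign(x + 0.3 * rng.normal(size=200))
print("training error:", np.mean(np.sign(boost(x, y)) != y))
```

Under the paper's analysis, the "gradient" style corresponds to the O(1/T) rate, while the Newton-style leaf values and the Newton-grown trees correspond to the linear and faster linear rates, respectively.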

Cite

Text

Sun et al. "A Convergence Rate Analysis for LogitBoost, MART and Their Variant." International Conference on Machine Learning, 2014.

Markdown

[Sun et al. "A Convergence Rate Analysis for LogitBoost, MART and Their Variant." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/sun2014icml-convergence/)

BibTeX

@inproceedings{sun2014icml-convergence,
  title     = {{A Convergence Rate Analysis for LogitBoost, MART and Their Variant}},
  author    = {Sun, Peng and Zhang, Tong and Zhou, Jie},
  booktitle = {International Conference on Machine Learning},
  year      = {2014},
  pages     = {1251--1259},
  volume    = {32},
  url       = {https://mlanthology.org/icml/2014/sun2014icml-convergence/}
}