Robust Regression by Boosting the Median

Abstract

Most boosting regression algorithms use the weighted average of base regressors as their final regressor. In this paper we analyze the choice of the weighted median instead. We propose a general boosting algorithm based on this approach. We prove boosting-type convergence of the algorithm and give clear conditions for the convergence of the robust training error. The algorithm recovers $\textsc{AdaBoost}$ and $\textsc{AdaBoost}_\varrho$ as special cases. For boosting confidence-rated predictions, it leads to a new approach that outputs a different decision and interprets robustness in a different manner than the approach based on the weighted average. In the general, non-binary case, we suggest practical strategies based on the analysis of the algorithm and on experiments.
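The key aggregation step is replacing the weighted average of base regressors with their weighted median, which is insensitive to a small (weighted) fraction of wildly wrong base predictions. A minimal sketch of weighted-median aggregation (the function name and the example predictions/coefficients are illustrative, not taken from the paper):

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value v such that the total weight of predictions
    less than or equal to v reaches half of the total weight."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return v[idx]

# Hypothetical base-regressor outputs with one outlier prediction,
# and per-regressor coefficients (weights) from boosting.
preds = [1.0, 2.0, 100.0, 3.0]
alphas = [0.3, 0.3, 0.1, 0.3]
print(weighted_median(preds, alphas))  # 2.0: the outlier is ignored
print(float(np.dot(preds, alphas)))   # 11.8: weighted average is dragged off
```

The contrast illustrates the robustness argument: a low-weight outlier barely moves the weighted median, while it dominates the weighted average.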

Cite

Text

Kégl. "Robust Regression by Boosting the Median." Annual Conference on Computational Learning Theory, 2003. doi:10.1007/978-3-540-45167-9_20

Markdown

[Kégl. "Robust Regression by Boosting the Median." Annual Conference on Computational Learning Theory, 2003.](https://mlanthology.org/colt/2003/kegl2003colt-robust/) doi:10.1007/978-3-540-45167-9_20

BibTeX

@inproceedings{kegl2003colt-robust,
  title     = {{Robust Regression by Boosting the Median}},
  author    = {Kégl, Balázs},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2003},
  pages     = {258--272},
  doi       = {10.1007/978-3-540-45167-9_20},
  url       = {https://mlanthology.org/colt/2003/kegl2003colt-robust/}
}