Convergence and Consistency of Regularized Boosting Algorithms with Stationary β-Mixing Observations

Abstract

We study the statistical convergence and consistency of regularized Boosting methods, where the samples are not independent and identically distributed (i.i.d.) but come from empirical processes of stationary β-mixing sequences. Utilizing a technique that constructs a sequence of independent blocks close in distribution to the original samples, we prove the consistency of the composite classifiers resulting from a regularization achieved by restricting the 1-norm of the base classifiers' weights. When compared to the i.i.d. case, the nature of sampling manifests in the consistency result only through a generalization of the original condition on the growth of the regularization parameter.
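To make the regularization concrete, here is a minimal sketch of a composite classifier whose base-classifier weights are constrained to a 1-norm ball. This is not the authors' algorithm: it runs a plain AdaBoost-style loop over decision stumps and then rescales the combined weights so their 1-norm does not exceed a cap `lam`, a stand-in for the paper's regularization parameter; the function names and the choice of stumps as base classifiers are illustrative assumptions.

```python
import numpy as np

def stump_predict(X, feature, threshold, sign):
    """Base classifier: a decision stump returning labels in {-1, +1}."""
    return sign * np.where(X[:, feature] <= threshold, 1.0, -1.0)

def fit_stump(X, y, w):
    """Pick the stump minimizing the weighted 0-1 error."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                err = np.sum(w * (stump_predict(X, j, t, s) != y))
                if err < best_err:
                    best_err, best = err, (j, t, s)
    return best, best_err

def l1_regularized_boost(X, y, n_rounds=20, lam=1.0):
    """AdaBoost-style combination whose weight vector is rescaled so
    its 1-norm is at most lam (one simple way to enforce the constraint)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(n_rounds):
        (j, t, s), err = fit_stump(X, y, w)
        err = max(err, 1e-12)
        if err >= 0.5:          # no base classifier better than chance
            break
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, j, t, s)
        w *= np.exp(-alpha * y * pred)   # reweight misclassified points
        w /= w.sum()
        stumps.append((j, t, s))
        alphas.append(alpha)
    alphas = np.array(alphas)
    # Enforce the 1-norm constraint: scale so that ||alpha||_1 <= lam.
    norm1 = np.abs(alphas).sum()
    if norm1 > lam:
        alphas *= lam / norm1
    def classify(Xnew):
        F = sum(a * stump_predict(Xnew, j, t, s)
                for a, (j, t, s) in zip(alphas, stumps))
        return np.sign(F)
    return classify

# Toy usage on i.i.d. data (the paper's point is that consistency
# survives under stationary β-mixing sampling as well).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
clf = l1_regularized_boost(X, y, n_rounds=10, lam=2.0)
print("training accuracy:", np.mean(clf(X) == y))
```

In the paper's setting, the cap grows with the sample size, and the consistency result amounts to a condition on how fast that growth may be; the dependent sampling only generalizes that growth condition.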

Cite

Text

Lozano et al. "Convergence and Consistency of Regularized Boosting Algorithms with Stationary β-Mixing Observations." Neural Information Processing Systems, 2005.

Markdown

[Lozano et al. "Convergence and Consistency of Regularized Boosting Algorithms with Stationary β-Mixing Observations." Neural Information Processing Systems, 2005.](https://mlanthology.org/neurips/2005/lozano2005neurips-convergence/)

BibTeX

@inproceedings{lozano2005neurips-convergence,
  title     = {{Convergence and Consistency of Regularized Boosting Algorithms with Stationary $\beta$-Mixing Observations}},
  author    = {Lozano, Aurelie C. and Kulkarni, Sanjeev R. and Schapire, Robert E.},
  booktitle = {Neural Information Processing Systems},
  year      = {2005},
  pages     = {819--826},
  url       = {https://mlanthology.org/neurips/2005/lozano2005neurips-convergence/}
}