Consistency of Multiclass Empirical Risk Minimization Methods Based on Convex Loss

Abstract

The consistency of classification algorithms plays a central role in statistical learning theory. A consistent algorithm guarantees that taking more samples essentially suffices to roughly reconstruct the unknown distribution. We consider the consistency of the ERM scheme over classes of combinations of very simple rules (base classifiers) in multiclass classification. Our approach is, under some mild conditions, to establish a quantitative relationship between classification errors and convex risks. In comparison with related previous work, a distinguishing feature of our result is that the conditions are expressed mainly in terms of differences between values of the convex loss function.
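To make the shape of such a quantitative relationship concrete, the following LaTeX sketch gives the generic form of a comparison (calibration) inequality of the kind the abstract describes; the transform psi below is an illustrative assumption in the style of Bartlett, Jordan, and McAuliffe (2006) and Zhang (2004), not the paper's exact theorem. Write R(f) for the misclassification risk of a classifier f, R_phi(f) for its risk under a convex loss phi, and R^*, R_phi^* for the respective infima over measurable f.

% Illustrative comparison inequality (assumed form, not the paper's statement):
% psi is a nondecreasing function with psi(0) = 0, determined by the convex loss phi.
\[
  \psi\bigl( R(f) - R^{*} \bigr) \;\le\; R_{\varphi}(f) - R_{\varphi}^{*}.
\]
% Consistency follows: if the ERM outputs f_n drive the convex risk to its
% infimum, the inequality forces the classification error down as well.
\[
  R_{\varphi}(f_n) \to R_{\varphi}^{*}
  \quad\Longrightarrow\quad
  R(f_n) \to R^{*}.
\]

In words: once the surrogate (convex) excess risk vanishes along the sequence of empirical risk minimizers, the excess classification error must vanish too, which is exactly the consistency statement the abstract refers to.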

Cite

Text

Chen and Sun. "Consistency of Multiclass Empirical Risk Minimization Methods Based on Convex Loss." Journal of Machine Learning Research, 7:2435-2447, 2006.

Markdown

[Chen and Sun. "Consistency of Multiclass Empirical Risk Minimization Methods Based on Convex Loss." Journal of Machine Learning Research, 7:2435-2447, 2006.](https://mlanthology.org/jmlr/2006/chen2006jmlr-consistency/)

BibTeX

@article{chen2006jmlr-consistency,
  title     = {{Consistency of Multiclass Empirical Risk Minimization Methods Based on Convex Loss}},
  author    = {Chen, Di-Rong and Sun, Tao},
  journal   = {Journal of Machine Learning Research},
  year      = {2006},
  pages     = {2435--2447},
  volume    = {7},
  url       = {https://mlanthology.org/jmlr/2006/chen2006jmlr-consistency/}
}