Delegating Classifiers

Abstract

A sensible use of classifiers must be based on the estimated reliability of their predictions. A cautious classifier would delegate the difficult or uncertain predictions to other, possibly more specialised, classifiers. In this paper we analyse and develop this idea of delegating classifiers in a systematic way. First, we design a two-step scenario where a first classifier chooses which examples to classify and delegates the difficult examples to train a second classifier. Second, we present an iterated scenario involving an arbitrary number of chained classifiers. We compare these scenarios to classical ensemble methods, such as bagging and boosting. We show experimentally that our approach is not far behind these methods in terms of accuracy, but with several advantages: (i) improved efficiency, since each classifier learns from fewer examples than the previous one; (ii) improved comprehensibility, since each classification derives from a single classifier; and (iii) the possibility of simplifying the overall multiclassifier by removing the parts that lead to delegation.
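The two-step scenario in the abstract can be sketched in code. This is an illustrative reconstruction, not the paper's exact algorithm: the confidence measure (maximum predicted-class probability), the threshold `tau`, and the choice of decision trees are all assumptions made for the sketch.

```python
# Hypothetical sketch of two-step delegation: a first classifier keeps the
# examples it is confident about and delegates the rest to train a second one.
# The confidence rule (max class probability vs. a threshold tau) is an
# assumption for illustration, not taken from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, flip_y=0.1,
                           random_state=0)

# Step 1: train a deliberately simple first classifier on all examples.
clf1 = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, y)

# Delegate the training examples whose predicted-class probability falls
# below the confidence threshold tau.
tau = 0.95
conf = clf1.predict_proba(X).max(axis=1)
delegated = conf < tau

# Step 2: train the second classifier only on the delegated examples,
# so it specialises in the region where the first classifier is unsure.
clf2 = DecisionTreeClassifier(random_state=0).fit(X[delegated], y[delegated])

def predict(X_new):
    """Route each example: clf1 if confident, otherwise delegate to clf2."""
    conf_new = clf1.predict_proba(X_new).max(axis=1)
    use_first = conf_new >= tau
    return np.where(use_first, clf1.predict(X_new), clf2.predict(X_new))

preds = predict(X)
```

Note that each prediction comes from exactly one classifier, which is the comprehensibility advantage the abstract claims over bagging or boosting, where every prediction is a vote over the whole ensemble. The iterated scenario would repeat the delegation step, chaining further classifiers on the remaining uncertain examples.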

Cite

Text

Ferri et al. "Delegating Classifiers." International Conference on Machine Learning, 2004. doi:10.1145/1015330.1015395

Markdown

[Ferri et al. "Delegating Classifiers." International Conference on Machine Learning, 2004.](https://mlanthology.org/icml/2004/ferri2004icml-delegating/) doi:10.1145/1015330.1015395

BibTeX

@inproceedings{ferri2004icml-delegating,
  title     = {{Delegating Classifiers}},
  author    = {Ferri, César and Flach, Peter A. and Hernández-Orallo, José},
  booktitle = {International Conference on Machine Learning},
  year      = {2004},
  doi       = {10.1145/1015330.1015395},
  url       = {https://mlanthology.org/icml/2004/ferri2004icml-delegating/}
}