Combining Classifiers by Constructive Induction

Abstract

Using multiple classifiers to increase learning accuracy is an active research area. In this paper we present a new general method for merging classifiers. The basic idea of Cascade Generalization is to sequentially run the set of classifiers, at each step performing an extension of the original data set by adding new attributes. The new attributes are derived from the probability class distribution given by a base classifier. This constructive step extends the representational language for the high-level classifiers, relaxing their bias. Cascade Generalization produces a single but structured model for the data that combines the model class representation of the base classifiers. We have performed an empirical evaluation of Cascade composition of three well-known classifiers: Naive Bayes, Linear Discriminant, and C4.5. Composite models show an increase in performance, sometimes impressive, when compared with the corresponding single models, at significant statistical confidence levels.
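The cascade scheme described in the abstract can be sketched in a few lines: a base classifier is fitted on the original attributes, its predicted class-probability distribution is appended to each example as new attributes, and a high-level classifier is then trained on the extended representation. The sketch below is an illustration of that idea using scikit-learn, not the authors' implementation; the use of Gaussian Naive Bayes as the base level and a decision tree standing in for C4.5 at the top level is an assumption for the example.

```python
# Illustrative sketch of Cascade Generalization (not the paper's code):
# level-1 class probabilities become new attributes for the level-2 learner.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level 1: fit the base classifier (Naive Bayes) on the original attributes.
base = GaussianNB().fit(X_tr, y_tr)

# Constructive step: extend each example with the base classifier's
# probability class distribution (one new attribute per class).
X_tr_ext = np.hstack([X_tr, base.predict_proba(X_tr)])
X_te_ext = np.hstack([X_te, base.predict_proba(X_te)])

# Level 2: the high-level classifier learns on the extended representation,
# so its hypothesis language now includes the base model's outputs.
top = DecisionTreeClassifier(random_state=0).fit(X_tr_ext, y_tr)
accuracy = top.score(X_te_ext, y_te)
```

Because each level only appends attributes, the result is a single structured model: the top-level tree can split both on the original attributes and on the probabilities produced by the base classifier.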

Cite

Text

Gama. "Combining Classifiers by Constructive Induction." European Conference on Machine Learning, 1998. doi:10.1007/BFB0026688

Markdown

[Gama. "Combining Classifiers by Constructive Induction." European Conference on Machine Learning, 1998.](https://mlanthology.org/ecmlpkdd/1998/gama1998ecml-combining/) doi:10.1007/BFB0026688

BibTeX

@inproceedings{gama1998ecml-combining,
  title     = {{Combining Classifiers by Constructive Induction}},
  author    = {Gama, João},
  booktitle = {European Conference on Machine Learning},
  year      = {1998},
  pages     = {178--189},
  doi       = {10.1007/BFB0026688},
  url       = {https://mlanthology.org/ecmlpkdd/1998/gama1998ecml-combining/}
}