Classifiers: A Theoretical and Empirical Study

Abstract

This paper describes how a competitive tree learning algorithm can be derived from first principles. The algorithm approximates the Bayesian decision-theoretic solution to the learning task. Comparative experiments with the algorithm and several mature AI and statistical families of tree learning algorithms currently in use show that the derived Bayesian algorithm is consistently as good or better, although sometimes at computational cost. Using the same strategy, we can design algorithms for many other supervised and model learning tasks given just a probabilistic representation for the kind of knowledge to be learned. As an illustration, a second learning algorithm is derived for learning Bayesian networks from data. Implications for incremental learning and the use of multiple models are also discussed.

1 Introduction

Systems for learning classification trees [Quinlan, 1986; Cestnik et al., 1987] are common in machine learning, statistics and pattern recognition. Despite these suc...

Cite

Text

Buntine. "Classifiers: A Theoretical and Empirical Study." International Joint Conference on Artificial Intelligence, 1991.

Markdown

[Buntine. "Classifiers: A Theoretical and Empirical Study." International Joint Conference on Artificial Intelligence, 1991.](https://mlanthology.org/ijcai/1991/buntine1991ijcai-classifiers/)

BibTeX

@inproceedings{buntine1991ijcai-classifiers,
  title     = {{Classifiers: A Theoretical and Empirical Study}},
  author    = {Buntine, Wray L.},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1991},
  pages     = {638--644},
  url       = {https://mlanthology.org/ijcai/1991/buntine1991ijcai-classifiers/}
}