Learning DNF via Probabilistic Evidence Combination

Abstract

One approach to learning DNF expressions from examples is to use a conjunctive learner to separately form each of the disjuncts. This paper describes a learning algorithm that follows this approach, extending our earlier work on learning conjunctive classifiers from noisy data to learning DNF classifiers from noisy data. Because no single disjunct covers every positive example, such learners must decide which positive examples to cover with each disjunct being learned. The central idea here is to model the uncertainty as to whether a positive example should be treated as positive for a particular disjunct as representational noise, in addition to whatever other noise may be imposed on the data by the environment. In experiments with synthetic data our learning method exhibits statistically significantly lower error rates during early learning when compared to the C4.5 algorithm, and on seven out of eight real-world datasets the error rates of classifiers learned by the new algorithm match or improve upon those of the classifiers learned by C4.5.
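To make the covering approach the abstract describes concrete, here is a minimal separate-and-conquer sketch in Python: each disjunct is produced by a conjunctive learner, and the positives it covers are removed before the next disjunct is learned. All names and the seed-and-generalize heuristic are illustrative assumptions; this does not implement the paper's probabilistic evidence-combination or noise-modeling machinery.

```python
# Illustrative separate-and-conquer DNF learner over boolean feature
# vectors. NOT the paper's algorithm: this sketch assumes noise-free,
# consistent data and a simple seed-and-generalize conjunctive learner.

def learn_conjunction(seed, neg):
    """Start maximally specific at one positive example (the seed),
    then drop literals as long as no negative example becomes covered."""
    conj = dict(enumerate(seed))  # feature index -> required value
    for i in list(conj):
        trial = {j: v for j, v in conj.items() if j != i}
        if not any(all(e[j] == v for j, v in trial.items()) for e in neg):
            conj = trial  # generalization is safe; keep it
    return conj

def covers(conj, example):
    return all(example[j] == v for j, v in conj.items())

def learn_dnf(pos, neg):
    """Cover all positives with a disjunction of conjunctions,
    removing the positives each new disjunct accounts for."""
    pos, dnf = list(pos), []
    while pos:
        conj = learn_conjunction(pos[0], neg)
        dnf.append(conj)
        pos = [e for e in pos if not covers(conj, e)]
    return dnf

def predict(dnf, example):
    return any(covers(conj, example) for conj in dnf)
```

For instance, on the target x0 OR (x1 AND x2) over three boolean features, feeding in all eight examples yields a two-disjunct hypothesis consistent with the data. The noisy-data setting the paper addresses is exactly where this naive sketch breaks down, since a single mislabeled negative blocks every generalization step.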

Cite

Text

Norton and Hirsh. "Learning DNF via Probabilistic Evidence Combination." International Conference on Machine Learning, 1993. doi:10.1016/B978-1-55860-307-3.50035-6


BibTeX

@inproceedings{norton1993icml-learning,
  title     = {{Learning DNF via Probabilistic Evidence Combination}},
  author    = {Norton, Steven W. and Hirsh, Haym},
  booktitle = {International Conference on Machine Learning},
  year      = {1993},
  pages     = {220--227},
  doi       = {10.1016/B978-1-55860-307-3.50035-6},
  url       = {https://mlanthology.org/icml/1993/norton1993icml-learning/}
}