Classifier Learning from Noisy Data as Probabilistic Evidence Combination

Abstract

This paper presents an approach to learning from noisy data that views the problem as one of reasoning under uncertainty, where prior knowledge of the noise process is applied to compute a posteriori probabilities over the hypothesis space. In preliminary experiments, this maximum a posteriori (MAP) approach exhibits a statistically significant learning-rate advantage over the C4.5 algorithm.

Introduction

The classifier learning problem is to use a set of labeled training data to induce a classifier that will accurately classify as-yet-unseen, unclassified testing data. Some approaches assume that the training data is correct [Mitchell, 1982]. Others assume that noise is present and simply tolerate it [Breiman et al., 1984; Quinlan, 1987]. A third approach is to exploit knowledge of the presence and nature of the noise [Hirsh, 1990b]. This paper takes the third approach, viewing classifier learning from noisy data as a problem of reasoning under uncertainty, where knowle...
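The MAP idea described above can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: it assumes a small finite hypothesis space and a known class-label flip rate, and scores each hypothesis by its log-posterior, where each observed label agrees with the hypothesis's prediction with probability (1 - noise_rate). The function name `map_hypothesis` and the threshold-classifier example are inventions for illustration.

```python
import math

def map_hypothesis(hypotheses, data, noise_rate, prior=None):
    """Return the MAP hypothesis under a known label-noise rate.

    Each observed label matches h(x) with probability (1 - noise_rate)
    and is flipped with probability noise_rate, so the log-posterior of
    h is log prior(h) plus the sum of per-example log-likelihoods.
    """
    n = len(hypotheses)
    best, best_score = None, -math.inf
    for h in hypotheses:
        # Uniform prior over hypotheses unless one is supplied.
        score = math.log(prior[h] if prior else 1.0 / n)
        for x, y in data:
            agree = (h(x) == y)
            score += math.log(1 - noise_rate if agree else noise_rate)
        if score > best_score:
            best, best_score = h, score
    return best

# Toy hypothesis space: 1-D threshold classifiers h_t(x) = [x >= t].
hypotheses = [lambda x, t=t: x >= t for t in range(5)]
# Labels consistent with threshold t = 2, except the last one is flipped.
data = [(0, False), (1, False), (2, True), (3, True), (4, False)]
h = map_hypothesis(hypotheses, data, noise_rate=0.1)
```

With one flipped label, the MAP choice is still the threshold t = 2 hypothesis, since it disagrees with only one observation; a purely consistency-driven learner would reject every hypothesis here.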

Cite

Text

Norton and Hirsh. "Classifier Learning from Noisy Data as Probabilistic Evidence Combination." AAAI Conference on Artificial Intelligence, 1992.

Markdown

[Norton and Hirsh. "Classifier Learning from Noisy Data as Probabilistic Evidence Combination." AAAI Conference on Artificial Intelligence, 1992.](https://mlanthology.org/aaai/1992/norton1992aaai-classifier/)

BibTeX

@inproceedings{norton1992aaai-classifier,
  title     = {{Classifier Learning from Noisy Data as Probabilistic Evidence Combination}},
  author    = {Norton, Steven W. and Hirsh, Haym},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {1992},
  pages     = {141-146},
  url       = {https://mlanthology.org/aaai/1992/norton1992aaai-classifier/}
}