Simple Bayesian Classifiers Do Not Assume Independence
Abstract
Bayes' theorem tells us how to optimally predict the class of a previously unseen example, given a training sample. The chosen class should be the one which maximizes P(C_i|E) = P(C_i) P(E|C_i) / P(E), where C_i is the ith class, E is the test example, P(Y|X) denotes the conditional probability of Y given X, and probabilities are estimated from the training sample. Let an example be a vector of a attributes. If the attributes are independent given the class, P(E|C_i) can be decomposed into the product P(v_1|C_i) ... P(v_a|C_i), where v_j is the value of the jth attribute in the example E.
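The decomposition in the abstract can be sketched in code: estimate the class priors P(C_i) and the per-attribute conditional frequencies P(v_j|C_i) from counts, then score each class by the product P(C_i) P(v_1|C_i) ... P(v_a|C_i). This is a minimal illustration, not the paper's implementation; the function names and toy data are assumptions for the example.

```python
from collections import defaultdict

def train(examples, labels):
    """Estimate P(C_i) and P(v_j | C_i) by counting over the training sample.
    examples: list of attribute-value tuples; labels: the class of each example."""
    class_counts = defaultdict(int)
    value_counts = defaultdict(int)  # keyed by (class, attribute index, value)
    for x, c in zip(examples, labels):
        class_counts[c] += 1
        for j, v in enumerate(x):
            value_counts[(c, j, v)] += 1
    n = len(examples)
    priors = {c: k / n for c, k in class_counts.items()}
    def likelihood(c, j, v):
        # Empirical frequency of value v for attribute j within class c.
        return value_counts[(c, j, v)] / class_counts[c]
    return priors, likelihood

def predict(priors, likelihood, x):
    """Return the class maximizing P(C_i) * prod_j P(v_j | C_i).
    P(E) is the same for every class, so it can be dropped from the comparison."""
    best, best_score = None, -1.0
    for c, prior in priors.items():
        score = prior
        for j, v in enumerate(x):
            score *= likelihood(c, j, v)
        if score > best_score:
            best, best_score = c, score
    return best
```

In practice one would smooth the counts (e.g. Laplace correction) and work with log-probabilities to avoid underflow; both are omitted here to keep the product form of the abstract visible.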
Cite

Domingos and Pazzani. "Simple Bayesian Classifiers Do Not Assume Independence." AAAI Conference on Artificial Intelligence, 1996. https://mlanthology.org/aaai/1996/domingos1996aaai-simple/

BibTeX
@inproceedings{domingos1996aaai-simple,
title = {{Simple Bayesian Classifiers Do Not Assume Independence}},
author = {Domingos, Pedro M. and Pazzani, Michael J.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1996},
pages = {1386},
url = {https://mlanthology.org/aaai/1996/domingos1996aaai-simple/}
}