Efficient Lazy Elimination for Averaged One-Dependence Estimators

Abstract

Semi-naive Bayesian classifiers seek to retain the numerous strengths of naive Bayes while reducing error by relaxing the attribute independence assumption. Backwards Sequential Elimination (BSE) is a wrapper technique for attribute elimination that has proved effective at this task. We explore a new technique, Lazy Elimination (LE), which eliminates highly related attribute-values at classification time without the computational overheads inherent in wrapper techniques. We analyze the effect of LE and BSE on a state-of-the-art semi-naive Bayesian algorithm, Averaged One-Dependence Estimators (AODE). Our experiments show that LE significantly reduces bias and error without undue computation, while BSE significantly reduces bias but not error, and has high training time complexity. In the context of AODE, LE has a significant advantage over BSE in both computational efficiency and error.
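The core idea of LE can be sketched in a few lines: at classification time, an attribute-value x_i is dropped when some other value x_j of the instance implies it, i.e. the observed estimate of P(x_i | x_j) is 1, so that x_i carries no information beyond x_j. The following is a minimal illustrative sketch, not the paper's implementation; the function name, data layout (frequency dictionaries), and the `min_count` threshold guarding unreliable estimates are all assumptions made for the example.

```python
from itertools import permutations

def lazy_eliminate(values, count, pair_count, min_count=30):
    """Return the attribute-values of an instance to use at classification time.

    values:     list of attribute-values observed in the test instance
    count:      dict mapping a value to its training-set frequency
    pair_count: dict mapping a (value, value) pair to its joint frequency
    min_count:  illustrative threshold; only trust P(x_i | x_j) = 1 when
                x_j has been seen often enough (assumption, not the paper's
                exact setting)
    """
    dropped = set()
    for xi, xj in permutations(values, 2):
        if xi in dropped or xj in dropped:
            continue
        n_j = count[xj]
        # P(xi | xj) estimated as 1 when xi co-occurs with every xj.
        if n_j >= min_count and pair_count.get((xi, xj), 0) == n_j:
            dropped.add(xi)  # xi is a generalization of xj; eliminate it
    return [v for v in values if v not in dropped]
```

For example, if every training case with `pregnant=yes` also has `gender=female`, then `gender=female` is redundant given `pregnant=yes` and is dropped from the instance before the probability estimates are combined. Because the elimination is decided per instance at classification time, no model search is performed during training, which is the efficiency advantage over wrapper methods such as BSE.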

Cite

Text

Zheng and Webb. "Efficient Lazy Elimination for Averaged One-Dependence Estimators." International Conference on Machine Learning, 2006. doi:10.1145/1143844.1143984

Markdown

[Zheng and Webb. "Efficient Lazy Elimination for Averaged One-Dependence Estimators." International Conference on Machine Learning, 2006.](https://mlanthology.org/icml/2006/zheng2006icml-efficient/) doi:10.1145/1143844.1143984

BibTeX

@inproceedings{zheng2006icml-efficient,
  title     = {{Efficient Lazy Elimination for Averaged One-Dependence Estimators}},
  author    = {Zheng, Fei and Webb, Geoffrey I.},
  booktitle = {International Conference on Machine Learning},
  year      = {2006},
  pages     = {1113--1120},
  doi       = {10.1145/1143844.1143984},
  url       = {https://mlanthology.org/icml/2006/zheng2006icml-efficient/}
}