Why Is Rule Learning Optimistic and How to Correct It
Abstract
In their search through a huge space of possible hypotheses, rule induction algorithms compare quality estimates of a large number of rules to find the one that appears best. This mechanism can easily find random patterns in the data which will – even though the estimating method itself may be unbiased (such as relative frequency) – have optimistically high quality estimates. It is generally believed that the problem, which eventually leads to overfitting, can be alleviated by using the m-estimate of probability. We show that this can only partially mend the problem, and propose a novel solution for making the common rule evaluation functions account for multiple comparisons in the search. Experiments on artificial data sets and data sets from the UCI repository show a large improvement in the accuracy of probability predictions and also a decent gain in the AUC of the constructed models.
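The two evaluation measures contrasted in the abstract can be sketched briefly. This is a minimal illustration of relative frequency versus the standard m-estimate of probability (which shrinks the relative frequency toward the class prior, with m controlling the strength of the shrinkage); it is not the paper's proposed correction for multiple comparisons.

```python
def relative_frequency(s, n):
    """Unbiased estimate: fraction of covered examples that are
    positive. Under search over many candidate rules it tends to
    be optimistically high for the best-looking rule."""
    return s / n

def m_estimate(s, n, prior, m=2.0):
    """m-estimate of probability: (s + m * prior) / (n + m).
    Pulls the relative frequency toward the class prior; larger m
    means stronger shrinkage toward the prior."""
    return (s + m * prior) / (n + m)

# A rule covering 3 examples, all positive, with a 0.5 class prior:
# relative frequency says 1.0, while the m-estimate is more cautious.
print(relative_frequency(3, 3))     # 1.0
print(m_estimate(3, 3, prior=0.5))  # (3 + 2*0.5) / (3 + 2) = 0.8
```

As the paper argues, this shrinkage only partially compensates for the optimism introduced by comparing many candidate rules during search.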
Cite
Text
Mozina et al. "Why Is Rule Learning Optimistic and How to Correct It." European Conference on Machine Learning, 2006. doi:10.1007/11871842_33
Markdown
[Mozina et al. "Why Is Rule Learning Optimistic and How to Correct It." European Conference on Machine Learning, 2006.](https://mlanthology.org/ecmlpkdd/2006/mozina2006ecml-rule/) doi:10.1007/11871842_33
BibTeX
@inproceedings{mozina2006ecml-rule,
title = {{Why Is Rule Learning Optimistic and How to Correct It}},
author = {Mozina, Martin and Demsar, Janez and Zabkar, Jure and Bratko, Ivan},
booktitle = {European Conference on Machine Learning},
year = {2006},
pages = {330-340},
doi = {10.1007/11871842_33},
url = {https://mlanthology.org/ecmlpkdd/2006/mozina2006ecml-rule/}
}