An Analysis of Rule Evaluation Metrics

Abstract

In this paper we analyze the most popular evaluation metrics for separate-and-conquer rule learning algorithms. Our results show that all commonly used heuristics, including accuracy, weighted relative accuracy, entropy, Gini index, and information gain, are equivalent to one of two fundamental prototypes: precision, which tries to optimize the area under the ROC curve for unknown costs, and a cost-weighted difference between covered positive and negative examples, which tries to find the optimal point under known or assumed costs. We also show that a straightforward generalization of the m-estimate trades off these two prototypes.

Published in: Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003)
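To make the two prototypes and their trade-off concrete, here is a minimal sketch in terms of the counts `p` (covered positive examples) and `n` (covered negative examples). The function names, the cost parameter `c`, and the use of a class prior in the m-estimate are illustrative assumptions, not the paper's exact notation:

```python
def precision(p, n):
    # Fraction of covered examples that are positive: p / (p + n).
    # Prototype 1: ranking rules by precision (unknown costs).
    return p / (p + n)

def cost_weighted_diff(p, n, c=0.5):
    # Prototype 2: cost-weighted difference between covered positives
    # and negatives, c * p - (1 - c) * n, for an assumed cost c in [0, 1].
    return c * p - (1 - c) * n

def m_estimate(p, n, m, prior):
    # A generalized m-estimate: (p + m * prior) / (p + n + m).
    # With m = 0 this reduces to precision; increasing m shifts the
    # evaluation toward the prior, trading off the two prototypes.
    return (p + m * prior) / (p + n + m)
```

For example, a rule covering 8 positives and 2 negatives has precision 0.8, while the m-estimate with `m = 2` and a prior of 0.5 smooths this to 0.75.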

Cite

Text

Fürnkranz and Flach. "An Analysis of Rule Evaluation Metrics." International Conference on Machine Learning, 2003.

Markdown

[Fürnkranz and Flach. "An Analysis of Rule Evaluation Metrics." International Conference on Machine Learning, 2003.](https://mlanthology.org/icml/2003/furnkranz2003icml-analysis/)

BibTeX

@inproceedings{furnkranz2003icml-analysis,
  title     = {{An Analysis of Rule Evaluation Metrics}},
  author    = {Fürnkranz, Johannes and Flach, Peter A.},
  booktitle = {International Conference on Machine Learning},
  year      = {2003},
  pages     = {202--209},
  url       = {https://mlanthology.org/icml/2003/furnkranz2003icml-analysis/}
}