Eliminating Class Noise in Large Datasets

Abstract

This paper presents a new approach for identifying and eliminating mislabeled instances in large or distributed datasets. We first partition a dataset into subsets, each of which is small enough to be processed by an induction algorithm at one time. We construct good rules from each subset and use these rules to evaluate the whole dataset. For a given instance Ik, two error count variables record the number of times it has been identified as noise across all subsets; instances with higher error counts have a higher probability of being mislabeled. Two threshold schemes, majority and non-objection, are used to identify the noise. Experimental results and comparative studies on real-world datasets are reported to evaluate the effectiveness and efficiency of the proposed approach.

ICML: Proceedings of the Twentieth International Conference on Machine Learning
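The two threshold schemes mentioned in the abstract can be illustrated with a minimal sketch. The function and variable names below are ours, not the paper's, and the per-subset rule induction step is abstracted away into a list of boolean verdicts (one per subset) for a single instance:

```python
# Illustrative sketch of the two noise-identification threshold schemes.
# "judgements" stands in for the error counts an instance accumulates when
# the good rules built from each subset are used to evaluate it.

def identify_noise(judgements, scheme="majority"):
    """judgements: per-subset verdicts for one instance, where True means
    that subset's good rules flagged the instance as mislabeled."""
    errors = sum(judgements)
    if scheme == "majority":
        # Majority scheme: noise if more than half of the subsets agree.
        return errors > len(judgements) / 2
    if scheme == "non-objection":
        # Non-objection scheme: noise only if no subset objects,
        # i.e. every subset flags the instance.
        return errors == len(judgements)
    raise ValueError(f"unknown scheme: {scheme}")

# Example: verdicts from five subset classifiers for one instance.
verdicts = [True, True, True, False, False]
print(identify_noise(verdicts, "majority"))       # True  (3 of 5 agree)
print(identify_noise(verdicts, "non-objection"))  # False (2 subsets object)
```

The non-objection scheme is the more conservative of the two: it removes fewer instances but is less likely to discard correctly labeled data.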

Cite

Text

Zhu et al. "Eliminating Class Noise in Large Datasets." International Conference on Machine Learning, 2003.

Markdown

[Zhu et al. "Eliminating Class Noise in Large Datasets." International Conference on Machine Learning, 2003.](https://mlanthology.org/icml/2003/zhu2003icml-eliminating/)

BibTeX

@inproceedings{zhu2003icml-eliminating,
  title     = {{Eliminating Class Noise in Large Datasets}},
  author    = {Zhu, Xingquan and Wu, Xindong and Chen, Qijun},
  booktitle = {International Conference on Machine Learning},
  year      = {2003},
  pages     = {920-927},
  url       = {https://mlanthology.org/icml/2003/zhu2003icml-eliminating/}
}