Identifying Mislabeled Data Using the Area Under the Margin Ranking

Abstract

Not all data in a typical training set help with generalization; some samples can be overly ambiguous or outright mislabeled. This paper introduces a new method to identify such samples and mitigate their impact when training neural networks. At the heart of our algorithm is the Area Under the Margin (AUM) statistic, which exploits differences in the training dynamics of clean and mislabeled samples. A simple procedure, adding an extra class populated with purposefully mislabeled threshold samples, learns an AUM upper bound that isolates mislabeled data. This approach consistently improves upon prior work on synthetic and real-world datasets. On the WebVision50 classification task our method removes 17% of training data, yielding a 1.6% (absolute) improvement in test error. On CIFAR100, removing 13% of the data leads to a 1.2% drop in error.
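The sketch below illustrates the core statistic described in the abstract: for each training sample, the margin is the logit of its assigned label minus the largest other logit, and AUM averages this margin over training. This is a minimal illustration based on the abstract only; the class and function names (`AUMTracker`, `batch_margins`) and all implementation details are assumptions, not the authors' reference code, and the threshold-sample procedure that learns the AUM cutoff is not shown.

```python
# Hypothetical sketch of tracking the AUM statistic during training.
# Names and structure are illustrative assumptions, not the paper's implementation.
import torch


def batch_margins(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Margin of the assigned label: its logit minus the largest *other* logit."""
    assigned = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float("-inf"))  # hide the assigned class
    largest_other = masked.max(dim=1).values
    return assigned - largest_other


class AUMTracker:
    """Accumulates per-sample margins across epochs; AUM is their running average."""

    def __init__(self, num_samples: int):
        self.margin_sum = torch.zeros(num_samples)
        self.num_updates = torch.zeros(num_samples)

    def update(self, sample_ids: torch.Tensor, logits: torch.Tensor, labels: torch.Tensor):
        # Call once per batch, each epoch, with the model's pre-softmax logits.
        m = batch_margins(logits.detach(), labels)
        self.margin_sum[sample_ids] += m.cpu()
        self.num_updates[sample_ids] += 1

    def aum(self) -> torch.Tensor:
        # Low (typically negative) AUM suggests a sample the paper would flag as mislabeled.
        return self.margin_sum / self.num_updates.clamp(min=1)
```

In the paper's procedure, samples whose AUM falls below a threshold learned from the purposefully mislabeled "threshold samples" (an extra class) are treated as mislabeled and removed; that second step is not sketched here.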

Cite

Text

Pleiss et al. "Identifying Mislabeled Data Using the Area Under the Margin Ranking." Neural Information Processing Systems, 2020.

Markdown

[Pleiss et al. "Identifying Mislabeled Data Using the Area Under the Margin Ranking." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/pleiss2020neurips-identifying/)

BibTeX

@inproceedings{pleiss2020neurips-identifying,
  title     = {{Identifying Mislabeled Data Using the Area Under the Margin Ranking}},
  author    = {Pleiss, Geoff and Zhang, Tianyi and Elenberg, Ethan and Weinberger, Kilian Q.},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/pleiss2020neurips-identifying/}
}