Deep k-NN for Noisy Labels

Abstract

Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify. In this paper, we provide an empirical study showing that a simple $k$-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than many recently proposed methods. We also provide new statistical guarantees on its efficacy.
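
The filtering step described in the abstract can be sketched in a few lines. The Python below is a minimal illustration, not the authors' implementation: it assumes a precomputed matrix of logits from a preliminary model, uses scikit-learn's NearestNeighbors, and keeps a training example only when at least half of its k nearest neighbors in logit space share its label. The paper's exact agreement rule, choice of k, and threshold may differ.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_filter(logits, labels, k=10):
    """Keep an example iff its label matches at least half of its
    k nearest neighbors in logit space (illustrative rule only)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(logits)
    _, idx = nn.kneighbors(logits)          # nearest neighbor of a point is itself
    neighbor_labels = labels[idx[:, 1:]]    # shape (n, k), self excluded
    # Fraction of neighbors that agree with each example's own label.
    agreement = (neighbor_labels == labels[:, None]).mean(axis=1)
    return agreement >= 0.5                 # boolean mask of examples to keep

# Hypothetical usage: random arrays stand in for a preliminary model's
# logit layer and possibly noisy labels.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
labels = rng.integers(0, 10, size=1000)
keep = knn_filter(logits, labels)
print(f"kept {keep.sum()} of {len(labels)} examples")

The surviving examples would then be used to train the final model on the cleaned data.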

Cite

Text

Bahri et al. "Deep k-NN for Noisy Labels." International Conference on Machine Learning, 2020.

Markdown

[Bahri et al. "Deep k-NN for Noisy Labels." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/bahri2020icml-deep/)

BibTeX

@inproceedings{bahri2020icml-deep,
  title     = {{Deep k-NN for Noisy Labels}},
  author    = {Bahri, Dara and Jiang, Heinrich and Gupta, Maya},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {540--550},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/bahri2020icml-deep/}
}