Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach

Abstract

We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They amount to at most a matrix inversion and multiplication, provided we know the probability of each class being corrupted into another. We further show how to estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large-scale dataset of clothing images, employing a diversity of architectures (stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers), demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.
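The two corrections the abstract mentions are concrete linear-algebra operations on the loss. Below is a minimal NumPy sketch, not the authors' released code: it assumes a known (or estimated) noise transition matrix `T` with `T[i, j] = p(noisy = j | clean = i)`, and the function names and toy data are illustrative only.

```python
import numpy as np

def estimate_T(probs):
    # probs: (n, c) softmax outputs of a model trained directly on noisy labels.
    # For each class i, take the sample the model is most confident belongs to
    # class i (an "anchor") and read off its full softmax row as row i of T.
    c = probs.shape[1]
    T = np.empty((c, c))
    for i in range(c):
        anchor = probs[:, i].argmax()
        T[i] = probs[anchor]
    return T

def forward_corrected_ce(probs, noisy_labels, T):
    # Forward correction: push clean-class predictions through the noise,
    # p(noisy | x) = T^T p(clean | x), then take ordinary cross-entropy
    # against the observed noisy labels (the "matrix multiplication").
    noisy_probs = probs @ T                      # row-wise T^T p
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.log(picked + 1e-12).mean()

def backward_corrected_ce(probs, noisy_labels, T):
    # Backward correction: multiply each sample's per-class loss vector by
    # T^{-1} (the "matrix inversion"); in expectation over the noise this
    # equals the loss under the clean label distribution.
    loss_all = -np.log(probs + 1e-12)            # (n, c): loss for every label
    corrected = loss_all @ np.linalg.inv(T).T    # row i = T^{-1} @ loss_all[i]
    return corrected[np.arange(len(noisy_labels)), noisy_labels].mean()

# Toy check: 3 classes, symmetric noise with 0.8 on the diagonal.
rng = np.random.default_rng(0)
T = np.full((3, 3), 0.1) + 0.7 * np.eye(3)
logits = rng.normal(size=(5, 3))
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
labels = rng.integers(0, 3, size=5)
print(forward_corrected_ce(probs, labels, T))
print(backward_corrected_ce(probs, labels, T))
```

In practice these losses would be written in an autodiff framework such as PyTorch or TensorFlow so that gradients flow through `probs`; the NumPy version above only makes the algebra explicit.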

Cite

Text

Patrini et al. "Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.240

Markdown

[Patrini et al. "Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/patrini2017cvpr-making/) doi:10.1109/CVPR.2017.240

BibTeX

@inproceedings{patrini2017cvpr-making,
  title     = {{Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach}},
  author    = {Patrini, Giorgio and Rozza, Alessandro and Menon, Aditya Krishna and Nock, Richard and Qu, Lizhen},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.240},
  url       = {https://mlanthology.org/cvpr/2017/patrini2017cvpr-making/}
}