Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
Abstract
Deep neural networks (DNNs) have achieved tremendous success in a variety of applications across many disciplines. Yet, their superior performance comes at the high cost of requiring correctly annotated large-scale datasets. Moreover, due to DNNs' rich capacity, errors in training labels can hamper performance. To combat this problem, mean absolute error (MAE) has recently been proposed as a noise-robust alternative to the commonly used categorical cross entropy (CCE) loss. However, as we show in this paper, MAE can perform poorly with DNNs and large-scale datasets. Here, we present a theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE. The proposed loss functions can be readily applied with any existing DNN architecture and algorithm, while yielding good performance in a wide range of noisy label scenarios. We report results from experiments conducted with CIFAR-10, CIFAR-100 and FASHION-MNIST datasets and synthetically generated noisy labels.
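The generalization referred to above is the paper's L_q loss, L_q(f(x), e_j) = (1 - f_j(x)^q) / q, which approaches CCE as q -> 0 and reduces to MAE (up to a constant factor) at q = 1. What follows is a minimal PyTorch sketch of that formulation, not the authors' reference implementation; the class name GeneralizedCrossEntropy, the default q = 0.7, and the clamping epsilon are illustrative choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralizedCrossEntropy(nn.Module):
    """L_q loss: (1 - p_y ** q) / q, where p_y is the predicted softmax
    probability of the labeled class. As q -> 0 the loss approaches CCE
    (-log p_y); at q = 1 it equals MAE on the softmax outputs up to a
    constant factor."""

    def __init__(self, q: float = 0.7, eps: float = 1e-7):
        super().__init__()
        self.q = q
        self.eps = eps  # illustrative guard against p_y ** q underflow

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits: (batch, num_classes); targets: (batch,) integer class labels
        probs = F.softmax(logits, dim=1)
        p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp(min=self.eps)
        return ((1.0 - p_y ** self.q) / self.q).mean()

Used this way, the loss is a drop-in replacement for nn.CrossEntropyLoss during training, e.g. loss = GeneralizedCrossEntropy(q=0.7)(model(x), y); the paper also studies a truncated variant of L_q, which this sketch omits.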
Cite
Text
Zhang and Sabuncu. "Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels." Neural Information Processing Systems, 2018.
Markdown
[Zhang and Sabuncu. "Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/zhang2018neurips-generalized/)
BibTeX
@inproceedings{zhang2018neurips-generalized,
title = {{Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels}},
author = {Zhang, Zhilu and Sabuncu, Mert},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {8778-8788},
url = {https://mlanthology.org/neurips/2018/zhang2018neurips-generalized/}
}