Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels

Abstract

Prior works have found it beneficial to combine provably noise-robust loss functions, e.g., mean absolute error (MAE), with standard categorical loss functions, e.g., cross entropy (CE), to improve their learnability. Here, we propose to use Jensen-Shannon divergence as a noise-robust loss function and show that it interestingly interpolates between CE and MAE with a controllable mixing parameter. Furthermore, we make a crucial observation that CE exhibits lower consistency around noisy data points. Based on this observation, we adopt a generalized version of the Jensen-Shannon divergence for multiple distributions to encourage consistency around data points. Using this loss function, we show state-of-the-art results on both synthetic (CIFAR) and real-world (e.g., WebVision) noise with varying noise rates.
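
To make the idea concrete, below is a minimal PyTorch sketch of a generalized Jensen-Shannon divergence loss in the spirit of the abstract: the label distribution and the predictions for one or more augmented views are mixed with weights pi, and the loss is the weighted sum of KL terms to the mixture. The function name generalized_js_loss, the uniform default weights, and the two-view usage are illustrative assumptions, not the authors' reference implementation.

# Sketch of a generalized Jensen-Shannon divergence loss (assumptions noted above).
import torch
import torch.nn.functional as F

def generalized_js_loss(label_dist, pred_logits_list, pi=None, eps=1e-12):
    # label_dist: (B, C) one-hot or soft labels.
    # pred_logits_list: list of (B, C) logits, one per augmented view.
    # pi: mixture weights over [labels] + predictions; uniform if None.
    dists = [label_dist] + [F.softmax(z, dim=-1) for z in pred_logits_list]
    num_dists = len(dists)
    if pi is None:
        pi = [1.0 / num_dists] * num_dists
    mixture = sum(w * d for w, d in zip(pi, dists))
    log_m = mixture.clamp_min(eps).log()
    loss = 0.0
    for w, d in zip(pi, dists):
        # Weighted KL(d || mixture); zero entries of d contribute zero.
        loss = loss + w * (d * (d.clamp_min(eps).log() - log_m)).sum(dim=-1)
    return loss.mean()

# Example usage with two augmented views and hard labels (illustrative):
# logits1, logits2 = model(x_aug1), model(x_aug2)
# y_onehot = F.one_hot(y, num_classes).float()
# loss = generalized_js_loss(y_onehot, [logits1, logits2])

With a single predicted distribution, this reduces to the two-distribution Jensen-Shannon divergence whose mixing parameter, per the abstract, controls an interpolation between CE-like and MAE-like behavior; see the paper for the exact limits and scaling.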

Cite

Text

Englesson and Azizpour. "Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels." Neural Information Processing Systems, 2021.

Markdown

[Englesson and Azizpour. "Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/englesson2021neurips-generalized/)

BibTeX

@inproceedings{englesson2021neurips-generalized,
  title     = {{Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels}},
  author    = {Englesson, Erik and Azizpour, Hossein},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/englesson2021neurips-generalized/}
}