Active Negative Loss Functions for Learning with Noisy Labels

Abstract

Robust loss functions are essential for training deep neural networks in the presence of noisy labels. Some robust loss functions use Mean Absolute Error (MAE) as a necessary component; for example, the recently proposed Active Passive Loss (APL) uses MAE as its passive loss function. However, MAE treats every sample equally, which slows convergence and can make training difficult. In this work, we propose a new class of theoretically robust passive loss functions, distinct from MAE, namely Normalized Negative Loss Functions (NNLFs), which focus more on memorized clean samples. By replacing the MAE in APL with our proposed NNLFs, we improve APL and propose a new framework called Active Negative Loss (ANL). Experimental results on benchmark and real-world datasets demonstrate that the new set of loss functions created by our ANL framework can outperform state-of-the-art methods. The code is available at https://github.com/Virusdoll/Active-Negative-Loss.
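
The abstract only sketches the framework, so the following PyTorch snippet is a rough illustration of the active-plus-passive structure it describes, not the paper's exact formulation (see the repository above for that). It pairs Normalized Cross Entropy as the active term with a complementary-label-style negative cross entropy standing in for the paper's NNLF; the `negative_ce_loss` form and the `alpha`/`beta` weights are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def nce_loss(logits, target, eps=1e-7):
    """Normalized Cross Entropy: CE over the target class divided by the
    sum of cross entropies over all classes (the active term in APL)."""
    log_probs = F.log_softmax(logits, dim=1)
    ce = -log_probs.gather(1, target.unsqueeze(1)).squeeze(1)
    denom = -log_probs.sum(dim=1)
    return (ce / (denom + eps)).mean()

def negative_ce_loss(logits, target, eps=1e-7):
    """Illustrative passive term: -log(1 - p_k) averaged over non-target
    classes, pushing non-target probabilities toward zero. This is a
    stand-in for the paper's NNLF, whose exact definition differs."""
    probs = F.softmax(logits, dim=1).clamp(min=eps, max=1.0 - eps)
    neg = -torch.log(1.0 - probs)                      # (batch, classes)
    mask = F.one_hot(target, probs.size(1)).bool()
    neg = neg.masked_fill(mask, 0.0)                   # drop the target class
    return (neg.sum(dim=1) / (probs.size(1) - 1)).mean()

def anl_style_loss(logits, target, alpha=1.0, beta=1.0):
    """Active + passive combination in the spirit of APL/ANL:
    loss = alpha * active + beta * passive."""
    return alpha * nce_loss(logits, target) + beta * negative_ce_loss(logits, target)
```

A call such as `anl_style_loss(model(x), y)` drops in wherever a standard `F.cross_entropy(model(x), y)` would be used during training.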

Cite

Text

Ye et al. "Active Negative Loss Functions for Learning with Noisy Labels." Neural Information Processing Systems, 2023.

Markdown

[Ye et al. "Active Negative Loss Functions for Learning with Noisy Labels." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/ye2023neurips-active/)

BibTeX

@inproceedings{ye2023neurips-active,
  title     = {{Active Negative Loss Functions for Learning with Noisy Labels}},
  author    = {Ye, Xichen and Li, Xiaoqiang and Dai, Songmin and Liu, Tong and Sun, Yan and Tong, Weiqin},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/ye2023neurips-active/}
}