Rethinking Label Refurbishment: Model Robustness Under Label Noise

Abstract

A family of methods that generates soft labels by mixing hard labels with another distribution, collectively known as label refurbishment, is widely used to train deep neural networks. However, some of these methods remain poorly understood in the presence of label noise. In this paper, we revisit four label refurbishment methods and reveal the strong connections between them. We find that they affect neural network models in different ways: two of them smooth the estimated posterior for a regularization effect, while the other two force the model to produce high-confidence predictions. We conduct extensive experiments to evaluate the related methods and observe that both effects improve model generalization under label noise. Furthermore, we theoretically show that both effects lead to generalization guarantees on the clean distribution despite training with noisy labels.
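The core idea described in the abstract, forming a soft target by mixing the hard label with another distribution, can be illustrated with a minimal sketch. The mixing coefficient `alpha` and the two example mixing distributions (the uniform distribution, as in label smoothing, and the model's own prediction, as in bootstrapping-style refurbishment) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def refurbish_label(hard_label: int, num_classes: int,
                    mix_dist: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """Mix the one-hot hard label with another distribution to form a soft label.

    alpha weights the (possibly noisy) hard label; (1 - alpha) weights mix_dist.
    """
    one_hot = np.eye(num_classes)[hard_label]
    return alpha * one_hot + (1.0 - alpha) * mix_dist

num_classes = 10
y_noisy = 3  # observed (possibly mislabeled) class index

# Smoothing-style refurbishment: mix with the uniform distribution.
uniform = np.full(num_classes, 1.0 / num_classes)
soft_smooth = refurbish_label(y_noisy, num_classes, uniform)

# Bootstrapping-style refurbishment: mix with the model's own softmax output
# (here a hypothetical prediction standing in for a network's output).
model_pred = np.array([0.02, 0.01, 0.70, 0.05, 0.02,
                       0.05, 0.05, 0.04, 0.03, 0.03])
soft_boot = refurbish_label(y_noisy, num_classes, model_pred)

print(soft_smooth)
print(soft_boot)
```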

Cite

Text

Lu et al. "Rethinking Label Refurbishment: Model Robustness Under Label Noise." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I12.26751

Markdown

[Lu et al. "Rethinking Label Refurbishment: Model Robustness Under Label Noise." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/lu2023aaai-rethinking/) doi:10.1609/AAAI.V37I12.26751

BibTeX

@inproceedings{lu2023aaai-rethinking,
  title     = {{Rethinking Label Refurbishment: Model Robustness Under Label Noise}},
  author    = {Lu, Yangdi and Xu, Zhiwei and He, Wenbo},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {15000--15008},
  doi       = {10.1609/AAAI.V37I12.26751},
  url       = {https://mlanthology.org/aaai/2023/lu2023aaai-rethinking/}
}