Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model

Abstract

The drastic increase in data quantity often brings a severe decrease in data quality, such as incorrect label annotations, which poses a great challenge for robustly training Deep Neural Networks (DNNs). Existing methods for learning with label noise either employ ad-hoc heuristics or are restricted to specific noise assumptions. More general settings, such as instance-dependent label noise, have not been fully explored, as few studies focus on the label corruption process. By categorizing instances into confusing and unconfusing ones, this paper proposes a simple yet universal probabilistic model that explicitly relates noisy labels to their instances. The resulting model can be realized by DNNs, and the training procedure is accomplished with a novel alternating optimization algorithm. Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements in robustness over state-of-the-art counterparts.
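The abstract does not spell out the model, so the sketch below is not the authors' formulation. It only illustrates the general alternating-optimization idea for instance-dependent label noise: alternate between estimating a posterior over whether each observed label is correct and refitting the classifier on the resulting soft-corrected labels. The synthetic data, the logistic model, and the fixed flip prior `pi` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data with instance-dependent label noise:
# points near the decision boundary (the "confusing" instances)
# are flipped with higher probability.
n, d = 500, 2
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0])
margin = X @ w_true
clean = (margin > 0).astype(float)
flip_prob = 0.4 * np.exp(-np.abs(margin))  # larger near the boundary
noisy = np.where(rng.random(n) < flip_prob, 1.0 - clean, clean)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Alternating optimization (generic EM-style scheme, not the paper's):
#   step 1: given the classifier, compute the posterior that each
#           observed label is correct (correctness is latent);
#   step 2: refit the classifier against posterior-weighted soft labels.
w = np.zeros(d)
pi = 0.3  # assumed prior probability that a label was flipped
for _ in range(20):
    p = sigmoid(X @ w)                            # P(y = 1 | x) under model
    lik_obs = np.where(noisy == 1, p, 1 - p)      # likelihood of label as given
    lik_flip = np.where(noisy == 1, 1 - p, p)     # likelihood if it was flipped
    post = (1 - pi) * lik_obs / ((1 - pi) * lik_obs + pi * lik_flip)
    target = post * noisy + (1 - post) * (1 - noisy)  # soft corrected label
    for _ in range(100):  # gradient descent on the weighted logistic loss
        grad = X.T @ (sigmoid(X @ w) - target) / n
        w -= 0.5 * grad

acc = np.mean((sigmoid(X @ w) > 0.5) == (clean == 1))
```

Because the noise concentrates on confusing boundary instances, the posterior step gradually discounts exactly those labels, so the final accuracy against the clean labels exceeds what naive training on the noisy labels would suggest.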

Cite

Text

Wang et al. "Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I11.17221

Markdown

[Wang et al. "Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/wang2021aaai-tackling/) doi:10.1609/AAAI.V35I11.17221

BibTeX

@inproceedings{wang2021aaai-tackling,
  title     = {{Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model}},
  author    = {Wang, Qizhou and Han, Bo and Liu, Tongliang and Niu, Gang and Yang, Jian and Gong, Chen},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {10183--10191},
  doi       = {10.1609/AAAI.V35I11.17221},
  url       = {https://mlanthology.org/aaai/2021/wang2021aaai-tackling/}
}