Instance-Dependent Label-Noise Learning Under a Structural Causal Model
Abstract
Label noise generally degrades the performance of deep learning algorithms because deep neural networks easily overfit label errors. Let $X$ and $Y$ denote the instance and clean label, respectively. When $Y$ is a cause of $X$, according to which many datasets have been constructed, e.g., *SVHN* and *CIFAR*, the distributions $P(X)$ and $P(Y|X)$ are generally entangled. This means that unsupervised instances are helpful for learning the classifier and can thus reduce the side effect of label noise. However, it remains unclear how to exploit this causal information to handle the label-noise problem. We propose to model and make use of the causal process in order to correct the label-noise effect. Empirically, the proposed method outperforms all state-of-the-art methods on both synthetic and real-world label-noise datasets.
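The entanglement claim can be illustrated with a toy generative model, not the paper's actual method: if $Y \sim \mathrm{Categorical}(\pi)$ causes $X \mid Y=k \sim \mathcal{N}(\mu_k, \sigma^2)$, then the marginal $P(X)$ and the classifier $P(Y|X)$ share the same parameters $(\pi, \mu)$, so fitting $P(X)$ on unlabeled instances already constrains the classifier. The sketch below assumes this simple Gaussian-mixture setting; all names (`pi`, `mu`, the EM loop) are illustrative.

```python
# Minimal sketch of the Y -> X causal process and the P(X) / P(Y|X) entanglement.
# This is NOT the paper's model; it only illustrates why unlabeled x helps.
import numpy as np

rng = np.random.default_rng(0)

# Simulate the causal process Y -> X.
pi_true = np.array([0.3, 0.7])      # P(Y)
mu_true = np.array([-2.0, 2.0])     # class-conditional means for P(X|Y)
sigma = 1.0
n = 5000
y = rng.choice(2, size=n, p=pi_true)
x = rng.normal(mu_true[y], sigma)   # keep only x: "unsupervised instances"

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Fit the marginal P(X) = sum_k pi_k N(x; mu_k, sigma^2) by EM on unlabeled data.
pi, mu = np.array([0.5, 0.5]), np.array([-1.0, 1.0])
for _ in range(100):
    # E-step: the responsibilities are exactly the classifier P(Y=k | X=x).
    resp = pi * gauss(x[:, None], mu, sigma)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update the parameters shared by P(X) and P(Y|X).
    pi = resp.mean(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)

print("estimated P(Y):", pi.round(2), "estimated means:", mu.round(2))
# The recovered (pi, mu) determine P(Y|X) via Bayes' rule, so unlabeled x alone
# pins down the classifier up to label permutation, independently of noisy labels.
```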
Cite
Text
Yao et al. "Instance-Dependent Label-Noise Learning Under a Structural Causal Model." Neural Information Processing Systems, 2021.

Markdown

[Yao et al. "Instance-Dependent Label-Noise Learning Under a Structural Causal Model." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/yao2021neurips-instancedependent/)

BibTeX
@inproceedings{yao2021neurips-instancedependent,
title = {{Instance-Dependent Label-Noise Learning Under a Structural Causal Model}},
author = {Yao, Yu and Liu, Tongliang and Gong, Mingming and Han, Bo and Niu, Gang and Zhang, Kun},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/yao2021neurips-instancedependent/}
}