ALIM: Adjusting Label Importance Mechanism for Noisy Partial Label Learning
Abstract
Noisy partial label learning (noisy PLL) is an important branch of weakly supervised learning. Unlike PLL, where the ground-truth label must be contained in the candidate label set, noisy PLL relaxes this constraint and allows the ground-truth label to fall outside the candidate label set. To address this challenging problem, most existing works attempt to detect noisy samples and estimate the ground-truth label for each of them. However, detection errors are unavoidable; these errors can accumulate during training and continuously affect model optimization. To mitigate this issue, we propose a novel framework for noisy PLL with theoretical interpretations, called "Adjusting Label Importance Mechanism (ALIM)". It reduces the negative impact of detection errors by trading off the initial candidate set against the model outputs. ALIM is a plug-in strategy that can be integrated with existing PLL approaches. Experimental results on multiple benchmark datasets demonstrate that our method achieves state-of-the-art performance on noisy PLL. Our code is available at: https://github.com/zeroQiaoba/ALIM.
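To illustrate the kind of trade-off the abstract describes, the sketch below mixes the initial candidate-label mask with the model's predicted probabilities using a trade-off coefficient, then renormalizes the result into per-class label weights. This is only a plausible reading of "trading off the initial candidate set and model outputs"; the function name `alim_label_weights`, the coefficient `lam`, and the exact mixing formula are illustrative assumptions, not the paper's published equations.

```python
import torch
import torch.nn.functional as F

def alim_label_weights(candidate_mask: torch.Tensor,
                       logits: torch.Tensor,
                       lam: float = 0.5) -> torch.Tensor:
    """Hypothetical ALIM-style trade-off between candidate set and model outputs.

    candidate_mask: (batch, num_classes) binary tensor, 1 for labels in the
        initial (possibly noisy) candidate set, 0 otherwise.
    logits: (batch, num_classes) raw model outputs.
    lam: trade-off coefficient; 0 trusts only the candidate set, while larger
        values let confident model predictions re-admit labels outside the set.
    """
    probs = F.softmax(logits, dim=-1)
    # Mix the candidate-set indicator with model confidence, then renormalize
    # so the label weights form a distribution over classes for each sample.
    mixed = candidate_mask.float() + lam * probs
    return mixed / mixed.sum(dim=-1, keepdim=True)
```

Used this way, a detection error (a ground-truth label missing from the candidate set) is not fatal: the model-output term can still assign that label a nonzero weight, which is consistent with the plug-in role ALIM plays on top of existing PLL training objectives.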
Cite
Text
Xu et al. "ALIM: Adjusting Label Importance Mechanism for Noisy Partial Label Learning." Neural Information Processing Systems, 2023.

Markdown

[Xu et al. "ALIM: Adjusting Label Importance Mechanism for Noisy Partial Label Learning." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/xu2023neurips-alim/)

BibTeX
@inproceedings{xu2023neurips-alim,
title = {{ALIM: Adjusting Label Importance Mechanism for Noisy Partial Label Learning}},
author = {Xu, Mingyu and Lian, Zheng and Feng, Lei and Liu, Bin and Tao, Jianhua},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/xu2023neurips-alim/}
}