Revisiting Consistency Regularization for Deep Partial Label Learning
Abstract
Partial label learning (PLL), which refers to the classification task where each training instance is ambiguously annotated with a set of candidate labels, has recently been studied in the deep learning paradigm. Despite advances in the recent deep PLL literature, existing methods (e.g., methods based on self-training or contrastive learning) suffer from either ineffectiveness or inefficiency. In this paper, we revisit a simple idea, namely consistency regularization, which has been shown effective in the traditional PLL literature, to guide the training of deep models. Towards this goal, a new regularized training framework, which performs supervised learning on non-candidate labels and employs consistency regularization on candidate labels, is proposed for PLL. We instantiate the regularization term by matching the outputs of multiple augmentations of an instance to a conformal label distribution, which can be adaptively inferred in closed form. Experiments on benchmark datasets demonstrate the superiority of the proposed method compared with other state-of-the-art methods.
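The abstract describes a two-part objective: a supervised loss on non-candidate labels and a consistency term that aligns predictions on multiple augmented views with a conformal label distribution supported on the candidate set. The PyTorch sketch below illustrates one plausible reading of that objective; the function name `pll_loss`, the negative-learning form of the supervised term, the cross-entropy consistency term, and the trade-off weight `lam` are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the regularized PLL objective described in the abstract.
# Details (loss form, weighting, normalization) are assumptions for illustration.
import torch
import torch.nn.functional as F

def pll_loss(logits_augs, candidate_mask, lam=1.0, eps=1e-12):
    """
    logits_augs: list of (B, C) logit tensors, one per augmentation of the same batch.
    candidate_mask: (B, C) float tensor, 1 where a label is in the candidate set.
    lam: trade-off weight for the consistency term (assumed; typically tuned or ramped up).
    """
    probs_augs = [F.softmax(z, dim=1) for z in logits_augs]

    # Supervised term on non-candidate labels: push their predicted
    # probabilities toward zero (negative learning on the complement set).
    non_cand = 1.0 - candidate_mask
    sup_terms = []
    for p in probs_augs:
        sup = -(non_cand * torch.log(1.0 - p + eps)).sum(1) / non_cand.sum(1).clamp(min=1)
        sup_terms.append(sup.mean())
    sup_loss = torch.stack(sup_terms).mean()

    # Conformal label distribution, inferred in closed form: average predictions
    # over augmentations, restrict to candidate labels, renormalize. Detached so
    # it serves as a soft target rather than a trainable quantity.
    with torch.no_grad():
        avg_p = torch.stack(probs_augs).mean(0)
        target = avg_p * candidate_mask
        target = target / target.sum(1, keepdim=True).clamp(min=eps)

    # Consistency regularization: match every augmented prediction to the target.
    cons_terms = [-(target * torch.log(p + eps)).sum(1).mean() for p in probs_augs]
    cons_loss = torch.stack(cons_terms).mean()

    return sup_loss + lam * cons_loss
```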
Cite
Text
Wu et al. "Revisiting Consistency Regularization for Deep Partial Label Learning." International Conference on Machine Learning, 2022.

Markdown
[Wu et al. "Revisiting Consistency Regularization for Deep Partial Label Learning." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/wu2022icml-revisiting/)

BibTeX
@inproceedings{wu2022icml-revisiting,
title = {{Revisiting Consistency Regularization for Deep Partial Label Learning}},
author = {Wu, Dong-Dong and Wang, Deng-Bao and Zhang, Min-Ling},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {24212--24225},
volume = {162},
url = {https://mlanthology.org/icml/2022/wu2022icml-revisiting/}
}