ComRank: Ranking Loss for Multi-Label Complementary Label Learning

Abstract

Multi-label complementary label learning (MLCLL) is a weakly supervised paradigm that addresses multi-label learning (MLL) tasks using complementary labels (i.e., irrelevant labels) instead of relevant labels. Existing methods typically adopt an unbiased risk estimator (URE) under the assumption that complementary labels follow a uniform distribution. However, this assumption fails in real-world scenarios due to instance-specific annotation biases, making URE-based methods ineffective under such conditions. Furthermore, existing methods underutilize label correlations inherent in MLL. To address these limitations, we propose ComRank, a ranking loss framework for MLCLL, which encourages complementary labels to be ranked lower than non-complementary ones, thereby modeling pairwise label relationships. Theoretically, our surrogate loss ensures Bayes consistency under both uniform and biased cases. Experiments demonstrate the effectiveness of our method in MLCLL tasks. The code is available at https://github.com/JellyJamZhu/ComRank.
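The ranking idea in the abstract — penalizing any complementary (known-irrelevant) label that scores above a non-complementary label — can be illustrated with a minimal sketch. This is not the paper's actual ComRank loss; it is a generic pairwise formulation with a logistic surrogate, and the function name and signature are hypothetical.

```python
import math

def pairwise_ranking_loss(scores, comp_labels):
    """Illustrative pairwise ranking loss for complementary labels.

    scores: real-valued model scores f_j(x), one per label.
    comp_labels: indices of complementary (known-irrelevant) labels.

    Each pair (j complementary, k non-complementary) contributes a
    logistic surrogate log(1 + exp(-(f_k - f_j))), which is large when
    the complementary label j is ranked above k. This is a generic
    sketch, not the surrogate used in the paper.
    """
    comp = set(comp_labels)
    non_comp = [k for k in range(len(scores)) if k not in comp]
    pairs = [(j, k) for j in comp for k in non_comp]
    if not pairs:
        return 0.0
    total = sum(math.log1p(math.exp(-(scores[k] - scores[j])))
                for j, k in pairs)
    return total / len(pairs)
```

Intuitively, driving this loss down pushes every complementary label's score below every non-complementary label's score, which is the pairwise label relationship the abstract describes.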

Cite

Text

Zhu et al. "ComRank: Ranking Loss for Multi-Label Complementary Label Learning." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhu et al. "ComRank: Ranking Loss for Multi-Label Complementary Label Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhu2025neurips-comrank/)

BibTeX

@inproceedings{zhu2025neurips-comrank,
  title     = {{ComRank: Ranking Loss for Multi-Label Complementary Label Learning}},
  author    = {Zhu, Jing-Yi and Gao, Yi and Xu, Miao and Zhang, Min-Ling},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhu2025neurips-comrank/}
}