Robust Label Proportions Learning
Abstract
Learning from Label Proportions (LLP) is a weakly-supervised paradigm that uses bag-level label proportions to train instance-level classifiers, offering a practical alternative to costly instance-level annotation. However, the weak supervision makes effective training challenging, and existing methods often rely on pseudo-labeling, which introduces noise. To address this, we propose RLPL, a two-stage framework. In the first stage, we use unsupervised contrastive learning to pretrain the encoder and train an auxiliary classifier with bag-level supervision. In the second stage, we introduce an LLP-OTD mechanism to refine pseudo labels and split them into high- and low-confidence sets. These sets are then used in LLPMix to train the final classifier. Extensive experiments and ablation studies on multiple benchmarks demonstrate that RLPL achieves performance comparable to the state of the art and effectively mitigates pseudo-label noise.
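The second stage described above partitions pseudo-labels into high- and low-confidence sets before they feed into LLPMix. The abstract does not specify the splitting rule, so the following is only a minimal sketch of one common approach (thresholding the classifier's maximum softmax probability); the function name and threshold are hypothetical, not part of RLPL.

```python
import numpy as np

def split_pseudo_labels(probs, threshold=0.9):
    """Illustrative confidence-based split of pseudo-labels.

    probs: (n_instances, n_classes) array of predicted class probabilities.
    Returns pseudo-labels plus index arrays for the high- and
    low-confidence subsets. The 0.9 threshold is an assumption.
    """
    pseudo = probs.argmax(axis=1)          # hard pseudo-label per instance
    confidence = probs.max(axis=1)         # confidence of that label
    high = np.where(confidence >= threshold)[0]
    low = np.where(confidence < threshold)[0]
    return pseudo, high, low

# Toy usage: two instances, one confident and one not.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40]])
pseudo, high, low = split_pseudo_labels(probs)
```

In a semi-supervised step such as LLPMix, the high-confidence subset would typically be trained with its pseudo-labels while the low-confidence subset is treated as unlabeled.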
Cite
Text
Chen et al. "Robust Label Proportions Learning." Advances in Neural Information Processing Systems, 2025.
Markdown
[Chen et al. "Robust Label Proportions Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/chen2025neurips-robust/)
BibTeX
@inproceedings{chen2025neurips-robust,
title = {{Robust Label Proportions Learning}},
author = {Chen, Jueyu and Wen, Wantao and Wang, Yeqiang and Lin, Erliang and Wang, Yemin and Jia, Yuheng},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/chen2025neurips-robust/}
}