Contrastive Credibility Propagation for Reliable Semi-Supervised Learning

Abstract

Producing labels for unlabeled data is error-prone, making semi-supervised learning (SSL) troublesome. Often, little is known about when and why an algorithm fails to outperform a supervised baseline. Using benchmark datasets, we craft five common real-world SSL data scenarios: few-label, open-set, noisy-label, and class distribution imbalance/misalignment in the labeled and unlabeled sets. We propose a novel algorithm called Contrastive Credibility Propagation (CCP) for deep SSL via iterative transductive pseudo-label refinement. CCP unifies semi-supervised learning and noisy label learning for the goal of reliably outperforming a supervised baseline in any data scenario. Compared to prior methods which focus on a subset of scenarios, CCP uniquely outperforms the supervised baseline in all scenarios, supporting practitioners when the qualities of labeled or unlabeled data are unknown.
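The mechanism named in the abstract, iterative pseudo-label refinement, can be illustrated with a generic sketch. This is not the authors' CCP algorithm (which propagates contrastive credibility scores); it is a hypothetical minimal loop showing only the broad pattern: repeatedly assign soft labels to unlabeled points, keep only confident assignments, and let those assignments influence the next round. The nearest-centroid scorer and the `threshold` parameter are illustrative assumptions, not part of the paper.

```python
import numpy as np

def refine_pseudo_labels(X_lab, y_lab, X_unl, n_classes,
                         n_iters=5, threshold=0.8):
    """Generic iterative pseudo-labeling loop (hypothetical, not CCP).

    Repeatedly scores unlabeled points against class centroids built from
    labeled data plus currently trusted pseudo-labels, keeping only
    assignments whose confidence clears `threshold` (-1 = unlabeled).
    """
    pseudo = np.full(len(X_unl), -1)  # -1 marks "not yet pseudo-labeled"
    for _ in range(n_iters):
        # Rebuild per-class centroids from labeled data plus any unlabeled
        # points whose pseudo-label survived the previous round.
        per_class = [X_lab[y_lab == c] for c in range(n_classes)]
        for c in range(n_classes):
            trusted = X_unl[pseudo == c]
            if len(trusted):
                per_class[c] = np.vstack([per_class[c], trusted])
        centroids = np.stack([Xc.mean(axis=0) for Xc in per_class])

        # Soft class scores: softmax over negative distances to centroids.
        d = np.linalg.norm(X_unl[:, None, :] - centroids[None, :, :], axis=2)
        p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf, cls = p.max(axis=1), p.argmax(axis=1)

        # Retain only confident assignments; the rest revert to unlabeled,
        # so mistakes can be undone on a later iteration.
        pseudo = np.where(conf >= threshold, cls, -1)
    return pseudo
```

In this toy form, the revert-to-unlabeled step is what makes the refinement iterative rather than a one-shot self-training pass: a point labeled in an early round can lose its label once better centroids emerge.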

Cite

Text

Kutt et al. "Contrastive Credibility Propagation for Reliable Semi-Supervised Learning." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I19.30124

Markdown

[Kutt et al. "Contrastive Credibility Propagation for Reliable Semi-Supervised Learning." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/kutt2024aaai-contrastive/) doi:10.1609/AAAI.V38I19.30124

BibTeX

@inproceedings{kutt2024aaai-contrastive,
  title     = {{Contrastive Credibility Propagation for Reliable Semi-Supervised Learning}},
  author    = {Kutt, Brody and Ramteke, Pralay and Mignot, Xavier and Toman, Pamela and Ramanan, Nandini and Chhetri, Sujit Rokka and Huang, Shan and Du, Min and Hewlett, William},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {21294--21303},
  doi       = {10.1609/AAAI.V38I19.30124},
  url       = {https://mlanthology.org/aaai/2024/kutt2024aaai-contrastive/}
}