Distribution Aligning Refinery of Pseudo-Label for Imbalanced Semi-Supervised Learning
Abstract
While semi-supervised learning (SSL) has proven to be a promising way to leverage unlabeled data when labeled data is scarce, existing SSL algorithms typically assume that training class distributions are balanced. However, SSL algorithms trained under imbalanced class distributions can suffer severely when generalizing to a balanced testing criterion, since they utilize pseudo-labels of unlabeled data that are biased toward the majority classes. To alleviate this issue, we formulate a convex optimization problem that softly refines the pseudo-labels generated by the biased model, and develop a simple algorithm, named Distribution Aligning Refinery of Pseudo-label (DARP), that solves it provably and efficiently. Under various class-imbalanced semi-supervised scenarios, we demonstrate the effectiveness of DARP and its compatibility with state-of-the-art SSL schemes.
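The abstract describes DARP as a convex optimization that refines biased pseudo-labels so their class distribution aligns with a target one. As a rough illustration of this idea only, and not the paper's exact solver or constraints, the sketch below uses a Sinkhorn-style alternating normalization to push the class marginal of a pseudo-label matrix toward a target distribution; the function name, iteration count, and balanced target are illustrative assumptions.

```python
import numpy as np

def align_pseudo_labels(probs, target_dist, n_iters=10, eps=1e-8):
    """Illustrative sketch (not DARP itself): alternately rescale a (N, K)
    matrix of pseudo-label probabilities so its per-class mass approaches
    `target_dist`, while keeping each row a valid distribution."""
    X = probs.copy()
    n = X.shape[0]
    for _ in range(n_iters):
        # Scale each column so total per-class mass matches the target counts.
        col_mass = X.sum(axis=0) + eps
        X = X * (target_dist * n / col_mass)
        # Renormalize rows so each example's refined pseudo-label sums to 1.
        X = X / (X.sum(axis=1, keepdims=True) + eps)
    return X

# Usage: pseudo-labels biased by an imbalanced model, refined toward a
# balanced target; after refinement, per-class mass is roughly uniform.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.full(10, 0.5), size=256)
refined = align_pseudo_labels(probs, np.full(10, 0.1))
print(refined.sum(axis=0) / 256)  # approximately 0.1 per class
```

Note that this alternating scheme only approximately satisfies the class-marginal constraint; the paper's formulation additionally controls how far the refined pseudo-labels may deviate from the originals, which this sketch omits.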
Cite
Text
Kim et al. "Distribution Aligning Refinery of Pseudo-Label for Imbalanced Semi-Supervised Learning." Neural Information Processing Systems, 2020.
Markdown
[Kim et al. "Distribution Aligning Refinery of Pseudo-Label for Imbalanced Semi-Supervised Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/kim2020neurips-distribution/)
BibTeX
@inproceedings{kim2020neurips-distribution,
title = {{Distribution Aligning Refinery of Pseudo-Label for Imbalanced Semi-Supervised Learning}},
author = {Kim, Jaehyung and Hur, Youngbum and Park, Sejun and Yang, Eunho and Hwang, Sung Ju and Shin, Jinwoo},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/kim2020neurips-distribution/}
}