On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning.

Abstract

Though self-supervised learning (SSL) has been widely studied as a promising technique for representation learning, it does not generalize well on long-tailed datasets because the majority classes dominate the feature space. Recent work shows that long-tailed learning performance can be boosted by sampling extra in-domain (ID) data for self-supervised training; however, large-scale ID data that can rebalance the minority classes are expensive to collect. In this paper, we propose an alternative, easy-to-use, and effective solution, \textbf{C}ontrastive with \textbf{O}ut-of-distribution (OOD) data for \textbf{L}ong-\textbf{T}ail learning (COLT), which effectively exploits OOD data to dynamically re-balance the feature space. We empirically identify the counter-intuitive usefulness of OOD samples in SSL long-tailed learning and accordingly design a novel SSL method. Concretely, we first localize the `\emph{head}' and `\emph{tail}' samples by assigning a tailness score to each OOD sample based on its neighborhood in the feature space. Then, we propose an online OOD sampling strategy to dynamically re-balance the feature space. Finally, we enforce the model to distinguish ID and OOD samples with a distribution-level supervised contrastive loss. Extensive experiments on various datasets and several state-of-the-art SSL frameworks verify the effectiveness of the proposed method. The results show that our method significantly improves the performance of SSL on long-tailed datasets, and even outperforms previous work that uses external ID data. Our code is available at \url{https://github.com/JianhongBai/COLT}.
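The sketch below illustrates the neighborhood-based idea described in the abstract: score each OOD sample by how sparsely populated its nearest ID neighborhood is, then sample OOD points in proportion to that score. It is a minimal illustration under assumed conventions (cosine similarity over L2-normalized features, the names `tailness_scores`, `sample_ood`, and the value of `k` are all illustrative), not the authors' implementation; refer to the linked repository for the official code.

```python
import torch
import torch.nn.functional as F


def tailness_scores(id_feats, ood_feats, k=10):
    """Illustrative tailness scoring: an OOD sample whose k nearest ID
    neighbors are only weakly similar (i.e. it falls in a sparse, tail-like
    region of the feature space) receives a higher score.

    id_feats:  (N_id, D)  L2-normalized ID features
    ood_feats: (N_ood, D) L2-normalized OOD features
    """
    sims = ood_feats @ id_feats.t()          # cosine similarities, shape (N_ood, N_id)
    topk_sim, _ = sims.topk(k, dim=1)        # similarity to the k nearest ID samples
    # Lower mean similarity to ID neighbors -> sparser neighborhood -> more "tail-like".
    return 1.0 - topk_sim.mean(dim=1)


def sample_ood(ood_feats, scores, budget):
    """Sample a budget of OOD points with probability increasing in tailness."""
    probs = torch.softmax(scores, dim=0)
    idx = torch.multinomial(probs, budget, replacement=False)
    return ood_feats[idx], idx


# Toy usage with random features standing in for encoder outputs.
id_feats = F.normalize(torch.randn(1000, 128), dim=1)
ood_feats = F.normalize(torch.randn(5000, 128), dim=1)
scores = tailness_scores(id_feats, ood_feats, k=10)
selected, idx = sample_ood(ood_feats, scores, budget=256)
print(selected.shape)  # torch.Size([256, 128])
```

In an online setting, such scores would be recomputed as the encoder is updated during self-supervised training, so the selected OOD pool adapts to the current feature space.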

Cite

Text

Bai et al. "On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning." International Conference on Learning Representations, 2023.

Markdown

[Bai et al. "On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/bai2023iclr-effectiveness/)

BibTeX

@inproceedings{bai2023iclr-effectiveness,
  title     = {{On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning}},
  author    = {Bai, Jianhong and Liu, Zuozhu and Wang, Hualiang and Hao, Jin and Feng, Yang and Chu, Huanpeng and Hu, Haoji},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/bai2023iclr-effectiveness/}
}