Relative Contrastive Loss for Unsupervised Representation Learning
Abstract
Defining positive and negative samples is critical for learning the visual variations of semantic classes in an unsupervised manner. Previous methods either construct positive sample pairs from different data augmentations of the same image (i.e., single-instance-positive) or estimate a class prototype by clustering (i.e., prototype-positive), both ignoring the relative nature of positive/negative concepts in the real world. Motivated by the human ability to recognize relatively positive/negative samples, we propose the Relative Contrastive Loss (RCL) to learn feature representations from relatively positive/negative pairs, which not only captures more real-world semantic variations than single-instance-positive methods but also respects the relativity of positives and negatives, in contrast to absolute prototype-positive methods. The proposed RCL improves the linear evaluation accuracy of MoCo v3 by **+2.0%** on ImageNet. Code will be released publicly upon acceptance.
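The abstract describes the idea only at a high level. Below is a minimal sketch, not the authors' released code, of how an InfoNCE-style contrastive loss (as used by MoCo) can be generalized from a single positive key per query to a set of relatively positive keys; the function name `relative_contrastive_loss`, the `pos_mask` input, and the temperature value are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch (assumed, not the paper's implementation) of a contrastive
# loss that admits multiple "relatively positive" keys per query.
import torch
import torch.nn.functional as F


def relative_contrastive_loss(query, keys, pos_mask, temperature=0.2):
    """Contrast each query against a bank of keys.

    query:    (B, D) query embeddings.
    keys:     (K, D) key embeddings (e.g., a MoCo-style queue).
    pos_mask: (B, K) mask marking which keys count as (relatively)
              positive for each query; all remaining keys act as negatives.
    """
    q = F.normalize(query, dim=1)
    k = F.normalize(keys, dim=1)
    logits = q @ k.t() / temperature            # (B, K) similarity scores
    log_prob = F.log_softmax(logits, dim=1)     # softmax over all keys
    pos_mask = pos_mask.float()
    # Average the log-likelihood over each query's positive set, so every
    # relatively positive key is pulled toward the query.
    pos_log_prob = (log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -pos_log_prob.mean()


# Usage: with exactly one positive per query this reduces to the standard
# single-instance-positive InfoNCE loss; widening pos_mask treats more
# semantic variations as relatively positive.
B, K, D = 8, 128, 64
query = torch.randn(B, D)
keys = torch.randn(K, D)
pos_mask = torch.zeros(B, K, dtype=torch.bool)
pos_mask[torch.arange(B), torch.arange(B)] = True  # one positive each
print(relative_contrastive_loss(query, keys, pos_mask).item())
```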
Cite
Text
Tang et al. "Relative Contrastive Loss for Unsupervised Representation Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19812-0_1
Markdown
[Tang et al. "Relative Contrastive Loss for Unsupervised Representation Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/tang2022eccv-relative/) doi:10.1007/978-3-031-19812-0_1
BibTeX
@inproceedings{tang2022eccv-relative,
title = {{Relative Contrastive Loss for Unsupervised Representation Learning}},
author = {Tang, Shixiang and Zhu, Feng and Bai, Lei and Zhao, Rui and Ouyang, Wanli},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19812-0_1},
url = {https://mlanthology.org/eccv/2022/tang2022eccv-relative/}
}