SigCLR: Sigmoid Contrastive Learning of Visual Representations

Abstract

We propose SigCLR: Sigmoid Contrastive Learning of Visual Representations. SigCLR uses a logistic (sigmoid) loss that operates only on individual pairs and does not require a global view of the batch, unlike the cross-entropy loss used in SimCLR. We show that the logistic loss achieves competitive performance on CIFAR-10, CIFAR-100, and Tiny-IN compared to other established SSL objectives. Our findings verify the importance of a learnable bias, as in SigLIP; however, SigCLR requires a fixed temperature, as in SimCLR, to excel. Overall, SigCLR is a promising replacement for SimCLR, which is ubiquitous and has shown tremendous success in various domains.
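
To make the pairwise objective concrete, here is a minimal sketch of a SigLIP-style sigmoid contrastive loss adapted to two augmented views, written in PyTorch. The function name `sigclr_loss`, the argument names, and the exact pairing scheme are illustrative assumptions, not the paper's reference implementation; per the abstract, the temperature is kept fixed while the bias is learnable.

```python
import torch
import torch.nn.functional as F

def sigclr_loss(z1, z2, temperature=10.0, bias=None):
    """Sketch of a pairwise sigmoid (logistic) contrastive loss.

    Every (i, j) pair in the batch gets an independent binary label:
    +1 for the two views of the same image, -1 otherwise. Unlike the
    softmax cross-entropy in SimCLR, no normalization over the whole
    batch (a "global view") is required.
    """
    z1 = F.normalize(z1, dim=-1)                      # unit-norm embeddings, view 1
    z2 = F.normalize(z2, dim=-1)                      # unit-norm embeddings, view 2
    logits = z1 @ z2.t() * temperature                # (N, N) scaled cosine similarities
    if bias is not None:
        logits = logits + bias                        # learnable bias, as in SigLIP
    n = z1.size(0)
    labels = 2.0 * torch.eye(n, device=z1.device) - 1.0  # +1 on diagonal, -1 elsewhere
    return -F.logsigmoid(labels * logits).mean()      # -log sigmoid(label * logit) per pair

# Hypothetical usage: a learnable scalar bias with a fixed temperature.
bias = torch.nn.Parameter(torch.zeros(()))
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)  # projections of two views
loss = sigclr_loss(z1, z2, temperature=10.0, bias=bias)
```

Because each pair contributes an independent logistic term, the loss decomposes over pairs and avoids the batch-wide softmax normalizer of SimCLR's cross-entropy objective.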

Cite

Text

Çağatan. "SigCLR: Sigmoid Contrastive Learning of Visual Representations." NeurIPS 2024 Workshops: SSL, 2024.

Markdown

[Çağatan. "SigCLR: Sigmoid Contrastive Learning of Visual Representations." NeurIPS 2024 Workshops: SSL, 2024.](https://mlanthology.org/neuripsw/2024/cagatan2024neuripsw-sigclr/)

BibTeX

@inproceedings{cagatan2024neuripsw-sigclr,
  title     = {{SigCLR: Sigmoid Contrastive Learning of Visual Representations}},
  author    = {Çağatan, Ömer Veysel},
  booktitle = {NeurIPS 2024 Workshops: SSL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/cagatan2024neuripsw-sigclr/}
}