Consistent Assignment for Representation Learning

Abstract

We introduce Consistent Assignment for Representation Learning (CARL), an unsupervised method for learning visual representations that combines contrastive learning with deep clustering. By viewing contrastive learning from a clustering perspective, CARL learns a set of general prototypes that serve as energy anchors, enforcing that different views of a given image are assigned to the same prototype. Unlike contemporary work combining contrastive learning with deep clustering, CARL learns the set of general prototypes online, using gradient descent, without the need for offline clustering or non-differentiable algorithms to solve the cluster assignment problem. CARL achieves results comparable to current state-of-the-art methods on the CIFAR-10, CIFAR-100, and STL-10 datasets.
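The core idea of the abstract — two views of the same image should produce the same assignment distribution over a set of learned prototypes — can be illustrated with a minimal NumPy sketch. This is not the paper's actual objective; the symmetric cross-entropy below, the temperature value, and all function names are assumptions chosen for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(z1, z2, prototypes, temperature=0.1):
    """Hypothetical consistency objective: each view's soft assignment over
    the prototypes should predict the other view's assignment.

    z1, z2     : (batch, dim) embeddings of two augmented views
    prototypes : (num_prototypes, dim) learnable prototype vectors
    """
    # Cosine similarity: normalize embeddings and prototypes to unit length.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    c = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)

    # Soft assignment distributions over prototypes for each view.
    p1 = softmax(z1 @ c.T / temperature)
    p2 = softmax(z2 @ c.T / temperature)

    # Symmetric cross-entropy pulls the two assignment distributions together;
    # being fully differentiable, it can be minimized by gradient descent,
    # updating encoder and prototypes jointly (the "online" aspect).
    eps = 1e-9
    ce12 = -(p1 * np.log(p2 + eps)).sum(axis=1).mean()
    ce21 = -(p2 * np.log(p1 + eps)).sum(axis=1).mean()
    return 0.5 * (ce12 + ce21)
```

Because every step (similarity, softmax, cross-entropy) is differentiable, no discrete cluster-assignment solver is needed, which is the contrast the abstract draws with offline or non-differentiable clustering approaches.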

Cite

Text

Silva and Rivera. "Consistent Assignment for Representation Learning." ICLR 2021 Workshops: EBM, 2021.

Markdown

[Silva and Rivera. "Consistent Assignment for Representation Learning." ICLR 2021 Workshops: EBM, 2021.](https://mlanthology.org/iclrw/2021/silva2021iclrw-consistent/)

BibTeX

@inproceedings{silva2021iclrw-consistent,
  title     = {{Consistent Assignment for Representation Learning}},
  author    = {Silva, Thalles Santos and Rivera, Adín Ramírez},
  booktitle = {ICLR 2021 Workshops: EBM},
  year      = {2021},
  url       = {https://mlanthology.org/iclrw/2021/silva2021iclrw-consistent/}
}