Contrastive Learning with Consistent Representations
Abstract
Contrastive learning has demonstrated great promise for representation learning. Data augmentations play a critical role in contrastive learning by providing informative views of the data without requiring explicit labels. Nonetheless, the efficacy of current methods hinges heavily on the quality of the employed data augmentation (DA) functions, which are often chosen manually from a limited set of options. While exploiting diverse data augmentations is appealing, the complexities inherent in both DAs and representation learning can lead to performance deterioration. To address this challenge and enable the systematic incorporation of diverse data augmentations, this paper proposes Contrastive Learning with Consistent Representations (CoCor). At the heart of CoCor is a novel consistency metric termed DA consistency, which governs how augmented input data are mapped to the representation space. Moreover, we propose to learn the optimal mapping locations as a function of the applied DA. Experimental results demonstrate that CoCor notably enhances the generalizability and transferability of learned representations in comparison to baseline methods. The implementation of CoCor can be found at https://github.com/zihuwang97/CoCor.
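As a rough illustration of the idea described above, the sketch below pairs a standard InfoNCE contrastive loss with a hypothetical DA-consistency penalty that pulls the similarity between weakly and strongly augmented views toward a target that stands in for a DA-dependent mapping location. The penalty form, the `target` value, and the function names `info_nce` and `da_consistency_penalty` are illustrative assumptions, not the paper's exact formulation; see the released code for the actual method.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Standard InfoNCE contrastive loss between two batches of views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

def da_consistency_penalty(z_weak, z_strong, target_sim):
    """Hypothetical consistency term: push the similarity between a weakly
    and a strongly augmented view toward a target that depends on the
    composite augmentation's strength (a stand-in for the paper's learned,
    DA-dependent mapping locations)."""
    sim = F.cosine_similarity(z_weak, z_strong, dim=1)
    return F.mse_loss(sim, target_sim.expand_as(sim))

# Toy usage with random stand-ins for encoder outputs.
B, D = 32, 128
z1, z2 = torch.randn(B, D), torch.randn(B, D)  # two weakly augmented views
z_strong = torch.randn(B, D)                   # a strongly augmented view
target = torch.tensor(0.7)                     # assumed DA-dependent target
loss = info_nce(z1, z2) + 0.5 * da_consistency_penalty(z1, z_strong, target)
```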
Cite
Text
Wang et al. "Contrastive Learning with Consistent Representations." Transactions on Machine Learning Research, 2024.
Markdown
[Wang et al. "Contrastive Learning with Consistent Representations." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/wang2024tmlr-contrastive/)
BibTeX
@article{wang2024tmlr-contrastive,
  title   = {{Contrastive Learning with Consistent Representations}},
  author  = {Wang, Zihu and Wang, Yu and Chen, Zhuotong and Hu, Hanbin and Li, Peng},
  journal = {Transactions on Machine Learning Research},
  year    = {2024},
  url     = {https://mlanthology.org/tmlr/2024/wang2024tmlr-contrastive/}
}