An Empirical Study on Disentanglement of Negative-Free Contrastive Learning
Abstract
Negative-free contrastive learning methods have attracted significant attention for their simplicity and impressive performance in large-scale pretraining. However, their disentanglement properties remain unexplored. In this paper, we empirically examine the disentanglement properties of negative-free contrastive learning methods. We find that existing disentanglement metrics fail to make meaningful measurements for high-dimensional representation models, so we propose a new disentanglement metric based on the mutual information between latent representations and data factors. With the proposed metric, we benchmark the disentanglement of negative-free contrastive learning on both popular synthetic datasets and the real-world dataset CelebA. Our study shows that the investigated methods can learn a well-disentangled subset of the representation. To the best of our knowledge, we are the first to extend the study of disentangled representation learning to high-dimensional representation spaces and to introduce negative-free contrastive learning methods into this area. The source code of this paper is available at https://github.com/noahcao/disentanglement_lib_med.
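The abstract describes a metric built on the mutual information between latent dimensions and ground-truth data factors. As a rough illustration only, and not the paper's actual metric, the sketch below computes a MIG-style score: it discretizes each latent dimension, estimates mutual information against every discrete factor with scikit-learn's mutual_info_score, and averages the entropy-normalized gap between the two most informative latent dimensions per factor. The function names (discretize, mi_matrix, mig_style_score) and the binning choice are hypothetical.

import numpy as np
from sklearn.metrics import mutual_info_score

def discretize(latents, bins=20):
    # Bin each continuous latent dimension so discrete MI can be estimated.
    out = np.zeros(latents.shape, dtype=int)
    for i in range(latents.shape[1]):
        edges = np.histogram(latents[:, i], bins)[1]
        out[:, i] = np.digitize(latents[:, i], edges[:-1])
    return out

def mi_matrix(latents, factors, bins=20):
    # MI between every (latent dimension, factor) pair.
    # latents: (N, D) continuous codes; factors: (N, K) discrete factor labels.
    z = discretize(latents, bins)
    D, K = z.shape[1], factors.shape[1]
    mi = np.zeros((D, K))
    for d in range(D):
        for k in range(K):
            mi[d, k] = mutual_info_score(z[:, d], factors[:, k])
    return mi

def mig_style_score(latents, factors, bins=20):
    # Average, over factors, of the gap between the top two MI values,
    # normalized by the factor's entropy (assumes D >= 2 latent dimensions).
    mi = mi_matrix(latents, factors, bins)
    score = 0.0
    for k in range(factors.shape[1]):
        col = np.sort(mi[:, k])[::-1]
        h = mutual_info_score(factors[:, k], factors[:, k])  # MI(X;X) = H(X)
        score += (col[0] - col[1]) / max(h, 1e-12)
    return score / factors.shape[1]

A high score indicates that each factor is captured mostly by a single latent dimension; applied to a high-dimensional representation, the per-factor gaps also reveal which subset of dimensions is well disentangled.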
Cite
Text
Cao et al. "An Empirical Study on Disentanglement of Negative-Free Contrastive Learning." Neural Information Processing Systems, 2022.
Markdown
[Cao et al. "An Empirical Study on Disentanglement of Negative-Free Contrastive Learning." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/cao2022neurips-empirical/)
BibTeX
@inproceedings{cao2022neurips-empirical,
title = {{An Empirical Study on Disentanglement of Negative-Free Contrastive Learning}},
author = {Cao, Jinkun and Nai, Ruiqian and Yang, Qing and Huang, Jialei and Gao, Yang},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/cao2022neurips-empirical/}
}