Mutual Contrastive Learning for Visual Representation Learning
Abstract
We present a collaborative learning method called Mutual Contrastive Learning (MCL) for general visual representation learning. The core idea of MCL is to perform mutual interaction and transfer of contrastive distributions among a cohort of networks. A crucial component of MCL is Interactive Contrastive Learning (ICL). Compared with vanilla contrastive learning, ICL can aggregate cross-network embedding information and maximizes a lower bound on the mutual information between two networks. This enables each network to learn extra contrastive knowledge from the others, leading to better feature representations for visual recognition tasks. We emphasize that the resulting MCL is conceptually simple yet empirically powerful. It is a generic framework that can be applied to both supervised and self-supervised representation learning. Experimental results on image classification and transfer learning to object detection show that MCL leads to consistent performance gains, demonstrating that MCL can guide the network to generate better feature representations. Code is available at https://github.com/winycg/MCL.
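To make the cross-network idea concrete, here is a minimal NumPy sketch of an InfoNCE-style interactive contrastive loss between two networks' embeddings. This is an illustrative assumption based on the abstract, not the authors' exact formulation; the function names (`info_nce`, `interactive_contrastive_loss`) and the temperature value are hypothetical. Minimizing an InfoNCE loss is equivalent to maximizing a lower bound on the mutual information between the two embedding views, which is the property ICL exploits.

```python
import numpy as np

def info_nce(queries, keys, temperature=0.1):
    """InfoNCE loss: queries[i] and keys[i] form a positive pair;
    all other keys act as negatives. Minimizing this loss maximizes
    a lower bound on the mutual information between the two views."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                   # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # cross-entropy on positives

def interactive_contrastive_loss(emb_a, emb_b, temperature=0.1):
    """Hypothetical sketch of interactive (cross-network) contrastive
    learning: each network's embeddings are contrasted against the
    other network's, so gradient information flows in both directions."""
    return (info_nce(emb_a, emb_b, temperature)
            + info_nce(emb_b, emb_a, temperature))

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))                  # embeddings from network A
b = a + 0.05 * rng.normal(size=(8, 16))       # network B roughly agrees on positives
loss = interactive_contrastive_loss(a, b)
print(loss)
```

When the two networks agree on positive pairs, the loss is small; for unrelated embeddings it approaches 2·log N, so the loss drives the cohort toward mutually consistent representations.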
Cite
Text
Yang et al. "Mutual Contrastive Learning for Visual Representation Learning." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I3.20211
Markdown
[Yang et al. "Mutual Contrastive Learning for Visual Representation Learning." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/yang2022aaai-mutual/) doi:10.1609/AAAI.V36I3.20211
BibTeX
@inproceedings{yang2022aaai-mutual,
title = {{Mutual Contrastive Learning for Visual Representation Learning}},
author = {Yang, Chuanguang and An, Zhulin and Cai, Linhang and Xu, Yongjun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {3045--3053},
doi = {10.1609/AAAI.V36I3.20211},
url = {https://mlanthology.org/aaai/2022/yang2022aaai-mutual/}
}