Boosting Contrastive Learning with Relation Knowledge Distillation

Abstract

While self-supervised representation learning (SSL) has proved effective for large models, there is still a huge gap between SSL and supervised methods on lightweight models when the same solution is followed. We delve into this problem and find that the lightweight model is prone to collapse in semantic space when it simply performs instance-wise contrast. To address this issue, we propose a relation-wise contrastive paradigm with Relation Knowledge Distillation (ReKD). We introduce a heterogeneous teacher to explicitly mine semantic information and transfer novel relation knowledge to the student (lightweight model). The theoretical analysis supports our main concern about instance-wise contrast and verifies the effectiveness of our relation-wise contrastive learning. Extensive experimental results also demonstrate that our method achieves significant improvements on multiple lightweight models. In particular, linear evaluation on AlexNet improves the current state of the art from 44.7% to 50.1%, making this the first work to get close to supervised performance (50.5%). Code will be made available.
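
The abstract only outlines the idea, so the following is a minimal, hypothetical PyTorch sketch of a relation-style distillation loss for intuition: the (heterogeneous) teacher's pairwise similarity distribution over a batch acts as a soft target for the student's. The function name relation_kd_loss, the temperature value, and the KL-based formulation are illustrative assumptions, not the paper's exact ReKD objective.

import torch
import torch.nn.functional as F

def relation_kd_loss(student_feats, teacher_feats, temperature=0.1):
    # Normalize embeddings so pairwise dot products are cosine similarities.
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    n = s.size(0)

    # Batch-wise relation matrices; drop self-similarity on the diagonal.
    off_diag = ~torch.eye(n, dtype=torch.bool, device=s.device)
    logits_s = (s @ s.t() / temperature)[off_diag].view(n, n - 1)
    logits_t = (t @ t.t() / temperature)[off_diag].view(n, n - 1)

    # The teacher's relation distribution is the soft target for the student's.
    p_t = F.softmax(logits_t, dim=1)
    log_p_s = F.log_softmax(logits_s, dim=1)
    return F.kl_div(log_p_s, p_t, reduction='batchmean')

# Toy usage: a 128-dim student and a 2048-dim teacher over a batch of 8 samples.
student = torch.randn(8, 128)
teacher = torch.randn(8, 2048)
loss = relation_kd_loss(student, teacher)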

Cite

Text

Zheng et al. "Boosting Contrastive Learning with Relation Knowledge Distillation." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I3.20262

Markdown

[Zheng et al. "Boosting Contrastive Learning with Relation Knowledge Distillation." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/zheng2022aaai-boosting/) doi:10.1609/AAAI.V36I3.20262

BibTeX

@inproceedings{zheng2022aaai-boosting,
  title     = {{Boosting Contrastive Learning with Relation Knowledge Distillation}},
  author    = {Zheng, Kai and Wang, Yuanjiang and Yuan, Ye},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {3508--3516},
  doi       = {10.1609/AAAI.V36I3.20262},
  url       = {https://mlanthology.org/aaai/2022/zheng2022aaai-boosting/}
}