Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-Network

Abstract

Contrastive loss has significantly improved performance in supervised classification tasks by using a multi-viewed framework that leverages augmentation and label information. Augmentation enables contrasting a single image against another view of itself but increases training time and memory usage. To exploit the strength of multiple views while avoiding the high computational cost, we introduce a multi-exit architecture that outputs multiple features of a single image in a single-viewed framework. To this end, we propose Self-Contrastive (SelfCon) learning, which self-contrasts within multiple outputs from different levels of a single network. The multi-exit architecture efficiently replaces multi-augmented images and leverages varied information from different layers of the network. We demonstrate that SelfCon learning improves the classification performance of the encoder network, and empirically analyze its advantages in terms of the single view and the sub-network. Furthermore, we provide theoretical evidence for the performance increase based on a mutual information bound. For ImageNet classification on ResNet-50, SelfCon improves accuracy by +0.6% with 59% of the memory and 48% of the training time of Supervised Contrastive learning, and a simple ensemble of the multi-exit outputs boosts performance by up to +1.5%. Our code is available at https://github.com/raymin0223/self-contrastive-learning.
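The core idea described in the abstract, contrasting features from a sub-network exit against features from the final exit of the same single-viewed image under a supervised-contrastive objective, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name, shapes, and the choice to average log-probabilities over positives are assumptions for the sketch.

```python
import numpy as np

def selfcon_loss(feats_exit, feats_final, labels, temperature=0.1):
    """Sketch of a SelfCon-style loss (illustrative, not the authors' API).

    feats_exit:  (N, D) L2-normalized features from an intermediate exit.
    feats_final: (N, D) L2-normalized features from the final exit of the
                 SAME images (single view, no extra augmentation).
    labels:      (N,) integer class labels.
    """
    # Pool both exits into one bank: each image contributes two features.
    z = np.concatenate([feats_exit, feats_final], axis=0)   # (2N, D)
    y = np.concatenate([labels, labels], axis=0)            # (2N,)

    # Cosine similarities (inputs are normalized), temperature-scaled.
    sim = z @ z.T / temperature                             # (2N, 2N)

    n = z.shape[0]
    self_mask = np.eye(n, dtype=bool)
    # Positives: same-label features, including the other exit of the
    # same image; the anchor itself is excluded.
    pos = (y[:, None] == y[None, :]) & ~self_mask

    # Log-softmax over all non-self pairs for each anchor.
    sim = np.where(self_mask, -np.inf, sim)
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # Mean negative log-probability over positives, then over anchors.
    per_anchor = -np.where(pos, logprob, 0.0).sum(axis=1)
    per_anchor /= np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

Because both feature sets come from one forward pass through a multi-exit network, the batch needs only a single view per image, which is where the memory and time savings over a two-view supervised contrastive setup come from.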

Cite

Text

Bae et al. "Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-Network." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/aaai.v37i1.25091

Markdown

[Bae et al. "Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-Network." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/bae2023aaai-self/) doi:10.1609/aaai.v37i1.25091

BibTeX

@inproceedings{bae2023aaai-self,
  title     = {{Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-Network}},
  author    = {Bae, Sangmin and Kim, Sungnyun and Ko, Jongwoo and Lee, Gihun and Noh, Seungjong and Yun, Se-Young},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {197--205},
  doi       = {10.1609/aaai.v37i1.25091},
  url       = {https://mlanthology.org/aaai/2023/bae2023aaai-self/}
}