Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning
Abstract
We introduce a novel Pseudo-Negative Regularization (PNR) framework for effective continual self-supervised learning (CSSL). Our PNR leverages pseudo-negatives obtained through model-based augmentation so that newly learned representations do not contradict what has been learned in the past. Specifically, for InfoNCE-based contrastive learning methods, we define symmetric pseudo-negatives obtained from the current and previous models and use them in both the main and regularization loss terms. Furthermore, we extend this idea to non-contrastive learning methods that do not inherently rely on negatives. For these methods, a pseudo-negative is defined as the previous model's output for a differently augmented version of the anchor sample and is applied asymmetrically to the regularization term. Extensive experimental results demonstrate that our PNR framework achieves state-of-the-art representation learning performance in CSSL by effectively balancing the trade-off between plasticity and stability.
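The abstract does not spell out the exact loss, but a minimal sketch can illustrate the contrastive case: embeddings produced by the frozen previous-task model are appended to the negative set of an InfoNCE-style objective, so the current representation is pushed away from them. The function name, the `z_prev_aug` argument, and the temperature value below are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def infonce_with_pseudo_negatives(z_cur, z_cur_aug, z_prev_aug, temperature=0.1):
    """InfoNCE-style loss with pseudo-negatives (a sketch, not the paper's exact PNR loss).

    z_cur, z_cur_aug : current-model embeddings of two augmented views, shape (N, D)
    z_prev_aug       : frozen previous-model embeddings of augmented views, shape (N, D)
    """
    z_cur = F.normalize(z_cur, dim=1)
    z_cur_aug = F.normalize(z_cur_aug, dim=1)
    z_prev_aug = F.normalize(z_prev_aug, dim=1).detach()  # previous model is frozen

    # Positive logit: similarity between the two current-model views of the same sample.
    pos = torch.sum(z_cur * z_cur_aug, dim=1, keepdim=True) / temperature      # (N, 1)

    # Negative logits: other samples in the batch plus previous-model outputs
    # acting as pseudo-negatives.
    neg_cur = z_cur @ z_cur_aug.t() / temperature                               # (N, N)
    neg_prev = z_cur @ z_prev_aug.t() / temperature                             # (N, N)

    # Mask the diagonal of the in-batch similarities (those are the positives).
    mask = torch.eye(z_cur.size(0), dtype=torch.bool, device=z_cur.device)
    neg_cur = neg_cur.masked_fill(mask, float('-inf'))

    logits = torch.cat([pos, neg_cur, neg_prev], dim=1)                         # (N, 1 + 2N)
    labels = torch.zeros(z_cur.size(0), dtype=torch.long, device=z_cur.device)  # positive is column 0
    return F.cross_entropy(logits, labels)
```

In this sketch the pseudo-negatives only enlarge the denominator of the softmax; the paper additionally uses them symmetrically in main and regularization terms for contrastive methods, and asymmetrically in the regularization term for non-contrastive ones.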
Cite
Text
Cha et al. "Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning." International Conference on Machine Learning, 2024.

Markdown
[Cha et al. "Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/cha2024icml-regularizing/)

BibTeX
@inproceedings{cha2024icml-regularizing,
title = {{Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning}},
author = {Cha, Sungmin and Cho, Kyunghyun and Moon, Taesup},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {6048--6065},
volume = {235},
url = {https://mlanthology.org/icml/2024/cha2024icml-regularizing/}
}