C-CLIP: Multimodal Continual Learning for Vision-Language Model

Abstract

Multimodal pre-trained models like CLIP require large-scale image-text pairs for training but often struggle with domain-specific tasks. Since retraining with specialized and historical data incurs significant memory and time costs, it is important to continually learn new domains in the open world while preserving the original performance. However, current continual learning research mainly focuses on single-modal scenarios, and existing evaluation criteria are insufficient, as they overlook image-text matching performance and the forgetting of zero-shot capability. This work introduces image-caption datasets from various domains and establishes a multimodal vision-language continual learning benchmark. A novel framework named C-CLIP is then proposed, which not only prevents forgetting but also substantially improves learning on new tasks. Comprehensive experiments demonstrate that our method has strong continual learning ability across image-text datasets from different domains, exhibits little forgetting of the original zero-shot prediction capability, and significantly outperforms existing methods.

Cite

Text

Liu et al. "C-CLIP: Multimodal Continual Learning for Vision-Language Model." International Conference on Learning Representations, 2025.

Markdown

[Liu et al. "C-CLIP: Multimodal Continual Learning for Vision-Language Model." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/liu2025iclr-cclip/)

BibTeX

@inproceedings{liu2025iclr-cclip,
  title     = {{C-CLIP: Multimodal Continual Learning for Vision-Language Model}},
  author    = {Liu, Wenzhuo and Zhu, Fei and Wei, Longhui and Tian, Qi},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/liu2025iclr-cclip/}
}