Enhanced Continual Learning of Vision-Language Models with Model Fusion

Abstract

Vision-Language Models (VLMs) represent a breakthrough in artificial intelligence by integrating visual and textual modalities to achieve impressive zero-shot capabilities. However, VLMs are susceptible to catastrophic forgetting when sequentially fine-tuned on multiple downstream tasks. Existing continual learning methods for VLMs often rely heavily on additional reference datasets, compromise zero-shot performance, or are limited to parameter-efficient fine-tuning scenarios. In this paper, we propose Continual Decoupling-Unifying (ConDU), a novel approach that introduces model fusion into continual learning for VLMs. ConDU maintains a unified model along with task triggers and prototype sets, employing an iterative process of decoupling task-specific models for previously learned tasks and unifying them with the model for the newly learned task. Additionally, we introduce an inference strategy for zero-shot scenarios that aggregates predictions from multiple decoupled task-specific models. Extensive experiments across various settings show that ConDU achieves up to a 2% improvement in average performance across all seen tasks compared to state-of-the-art baselines, while also enhancing zero-shot capabilities relative to the original VLM.
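
To make the decouple/unify loop concrete, below is a minimal sketch assuming a simple task-vector-style fusion rule on model weights; the abstract does not specify ConDU's actual fusion mechanism, and every name here (task_vector, unify, decouple, trigger) is an illustrative assumption rather than the authors' implementation.

# Hypothetical sketch of a decouple/unify cycle via task-vector arithmetic.
# Assumption: fusion is a simple mean over per-task weight updates; ConDU's
# real triggers and fusion rule may differ.
from typing import Dict, List
import torch

StateDict = Dict[str, torch.Tensor]

def task_vector(finetuned: StateDict, base: StateDict) -> StateDict:
    """Per-task weight update relative to the pretrained VLM."""
    return {k: finetuned[k] - base[k] for k in base}

def unify(base: StateDict, task_vectors: List[StateDict]) -> StateDict:
    """Fuse all per-task updates into a single unified model (mean fusion)."""
    unified = {k: v.clone() for k, v in base.items()}
    for tv in task_vectors:
        for k in unified:
            unified[k] += tv[k] / len(task_vectors)
    return unified

def decouple(unified: StateDict, base: StateDict, trigger: StateDict) -> StateDict:
    """Approximately recover one task-specific model from the unified weights
    by rescaling the fused update with a stored per-task trigger."""
    return {k: base[k] + trigger[k] * (unified[k] - base[k]) for k in base}

def aggregate_logits(per_model_logits: List[torch.Tensor]) -> torch.Tensor:
    """Zero-shot inference: average predictions over decoupled task models."""
    return torch.stack(per_model_logits).mean(dim=0)

For a zero-shot input, each decoupled task-specific model scores the example and the averaged logits form the final prediction, mirroring the aggregation strategy described in the abstract.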

Cite

Text

Gao et al. "Enhanced Continual Learning of Vision-Language Models with Model Fusion." ICLR 2025 Workshops: SCOPE, 2025.

Markdown

[Gao et al. "Enhanced Continual Learning of Vision-Language Models with Model Fusion." ICLR 2025 Workshops: SCOPE, 2025.](https://mlanthology.org/iclrw/2025/gao2025iclrw-enhanced/)

BibTeX

@inproceedings{gao2025iclrw-enhanced,
  title     = {{Enhanced Continual Learning of Vision-Language Models with Model Fusion}},
  author    = {Gao, Haoyuan and Zhang, Zicong and Wei, Yuqi and Zhao, Linglan and Li, Guilin and Li, Yexin and Kong, Linghe and Huang, Weiran},
  booktitle = {ICLR 2025 Workshops: SCOPE},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/gao2025iclrw-enhanced/}
}