MOS: Model Surgery for Pre-Trained Model-Based Class-Incremental Learning

Abstract

Class-Incremental Learning (CIL) requires models to continually acquire knowledge of new classes without forgetting old ones. Although Pre-trained Models (PTMs) have shown excellent performance in CIL, catastrophic forgetting still occurs as the model learns new concepts. Existing work seeks to utilize lightweight components to adjust the PTM, yet forgetting still arises at both the parameter and retrieval levels. Specifically, iterative updates of the model result in parameter drift, while mistakenly retrieving irrelevant modules leads to mismatches during inference. To this end, we propose MOdel Surgery (MOS) to rescue the model from forgetting previous knowledge. By training task-specific adapters, we continually adjust the PTM to downstream tasks. To mitigate parameter-level forgetting, we present an adapter merging approach to learn task-specific adapters, which aims to bridge the gap between different components while preserving task-specific information. Besides, to address retrieval-level forgetting, we introduce a training-free self-refined adapter retrieval mechanism during inference, which leverages the model's inherent ability for better adapter retrieval. By jointly rectifying the model with these steps, MOS can robustly resist catastrophic forgetting in the learning process. Extensive experiments on seven benchmark datasets validate MOS's state-of-the-art performance.
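The adapter-merging idea described above can be sketched minimally as follows. This is a hypothetical illustration only: it assumes merging means a simple element-wise average of previously learned task-specific adapter weights used to initialize the next task's adapter, which is one plausible reading; the paper's exact merging rule, adapter architecture, and function names here (`merge_adapters`) are not taken from the source.

```python
import numpy as np

def merge_adapters(adapters):
    """Merge task-specific adapters by element-wise averaging (an assumed rule,
    not necessarily the paper's exact merging scheme)."""
    return np.mean(np.stack(adapters), axis=0)

# Toy "adapter" weight matrices for three previously learned tasks.
previous_adapters = [np.full((2, 2), float(t)) for t in range(3)]  # values 0, 1, 2

# When a new task arrives, its adapter could be initialized from the merge,
# bridging earlier components, then fine-tuned on the new task's data.
new_task_init = merge_adapters(previous_adapters)
print(new_task_init)  # every entry is (0 + 1 + 2) / 3 = 1.0
```

Initializing from a merged adapter rather than from scratch is one common way to limit parameter drift, since each new adapter starts near the shared solution of earlier tasks.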

Cite

Text

Sun et al. "MOS: Model Surgery for Pre-Trained Model-Based Class-Incremental Learning." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I19.34281

Markdown

[Sun et al. "MOS: Model Surgery for Pre-Trained Model-Based Class-Incremental Learning." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/sun2025aaai-mos/) doi:10.1609/AAAI.V39I19.34281

BibTeX

@inproceedings{sun2025aaai-mos,
  title     = {{MOS: Model Surgery for Pre-Trained Model-Based Class-Incremental Learning}},
  author    = {Sun, Hai-Long and Zhou, Da-Wei and Zhao, Hanbin and Gan, Le and Zhan, De-Chuan and Ye, Han-Jia},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {20699--20707},
  doi       = {10.1609/AAAI.V39I19.34281},
  url       = {https://mlanthology.org/aaai/2025/sun2025aaai-mos/}
}