Integrating Task-Specific and Universal Adapters for Pre-Trained Model-Based Class-Incremental Learning
Abstract
Class-Incremental Learning (CIL) requires a learning system to continually learn new classes without forgetting. Existing pre-trained model-based CIL methods often freeze the pre-trained network and adapt to incremental tasks using additional lightweight modules such as adapters. However, incorrect module selection during inference hurts performance, and task-specific modules often overlook shared general knowledge, leading to errors when distinguishing similar classes across tasks. To address these challenges, we propose integrating Task-Specific and Universal Adapters (TUNA) in this paper. Specifically, we train task-specific adapters to capture the most crucial features relevant to their respective tasks and introduce an entropy-based selection mechanism to choose the most suitable adapter. Furthermore, we leverage an adapter fusion strategy to construct a universal adapter, which encodes the most discriminative features shared across tasks. We combine task-specific and universal adapter predictions to harness both specialized and general knowledge during inference. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of our approach. Code is available at https://github.com/LAMDA-CL/ICCV2025-TUNA.
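The inference procedure described in the abstract (entropy-based adapter selection, adapter fusion into a universal adapter, and a combined prediction) can be illustrated with a short sketch. This is a minimal, hypothetical PyTorch outline and not the paper's released code: the `backbone(x, adapter=...)` call signature, the parameter-averaging fusion in `fuse_adapters`, and the weighting factor `alpha` are assumptions made purely for illustration.

```python
import copy

import torch
import torch.nn.functional as F


def entropy(logits):
    # Shannon entropy of the softmax distribution; lower entropy = higher confidence.
    p = F.softmax(logits, dim=-1)
    return -(p * (p + 1e-12).log()).sum(dim=-1)


def fuse_adapters(adapters):
    # Hypothetical fusion strategy: average the parameters of all task-specific
    # adapters to obtain a single "universal" adapter.
    fused = copy.deepcopy(adapters[0])
    with torch.no_grad():
        for fused_p, *task_ps in zip(fused.parameters(),
                                     *(a.parameters() for a in adapters)):
            fused_p.copy_(torch.stack(task_ps).mean(dim=0))
    return fused


@torch.no_grad()
def predict(x, backbone, adapters, heads, universal_adapter, universal_head, alpha=0.5):
    # Entropy-based selection: query every task-specific adapter and keep the one
    # whose prediction over all seen classes is most confident (lowest entropy).
    best_logits, best_ent = None, float("inf")
    for adapter, head in zip(adapters, heads):
        logits = head(backbone(x, adapter=adapter))  # assumed backbone API
        ent = entropy(logits).mean().item()
        if ent < best_ent:
            best_ent, best_logits = ent, logits

    # Universal adapter: general knowledge shared across tasks.
    universal_logits = universal_head(backbone(x, adapter=universal_adapter))

    # Combine specialized and general predictions.
    probs = (alpha * F.softmax(best_logits, dim=-1)
             + (1 - alpha) * F.softmax(universal_logits, dim=-1))
    return probs.argmax(dim=-1)
```

For the authors' actual implementation, see the repository linked above.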
Cite
Text
Wang et al. "Integrating Task-Specific and Universal Adapters for Pre-Trained Model-Based Class-Incremental Learning." International Conference on Computer Vision, 2025.
Markdown
[Wang et al. "Integrating Task-Specific and Universal Adapters for Pre-Trained Model-Based Class-Incremental Learning." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/wang2025iccv-integrating/)
BibTeX
@inproceedings{wang2025iccv-integrating,
title = {{Integrating Task-Specific and Universal Adapters for Pre-Trained Model-Based Class-Incremental Learning}},
author = {Wang, Yan and Zhou, Da-Wei and Ye, Han-Jia},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {806--816},
url = {https://mlanthology.org/iccv/2025/wang2025iccv-integrating/}
}