DualCP: Rehearsal-Free Domain-Incremental Learning via Dual-Level Concept Prototype
Abstract
Domain-Incremental Learning (DIL) enables vision models to adapt to changing conditions in real-world environments while maintaining the knowledge acquired from previous domains. Given privacy concerns and training-time constraints, Rehearsal-Free DIL (RFDIL) is the more practical setting. Inspired by the incremental cognitive process of the human brain, we design Dual-level Concept Prototypes (DualCP) for each class to address the conflict between learning new knowledge and retaining old knowledge in RFDIL. To construct DualCP, we propose a Concept Prototype Generator (CPG) that generates both coarse-grained and fine-grained prototypes for each class. Additionally, we introduce a Coarse-to-Fine calibrator (C2F) to align image features with DualCP. Finally, we propose a Dual Dot-Regression (DDR) loss function to optimize our C2F module. Extensive experiments on the DomainNet, CDDB, and CORe50 datasets demonstrate the effectiveness of our method.
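The abstract describes regressing calibrated image features toward class prototypes at two granularities via a Dual Dot-Regression loss. The paper's exact formulation is not reproduced here; the following is a minimal PyTorch sketch under common assumptions: a dot-regression term pulls the normalized feature's dot product with its class prototype toward 1 (as in standard dot-regression losses), applied once per granularity, with a hypothetical weighting factor `lam`.

```python
import torch
import torch.nn.functional as F

def dual_dot_regression_loss(feat_coarse: torch.Tensor,
                             feat_fine: torch.Tensor,
                             protos_coarse: torch.Tensor,
                             protos_fine: torch.Tensor,
                             labels: torch.Tensor,
                             lam: float = 0.5) -> torch.Tensor:
    """Illustrative dual-level dot-regression objective (not the paper's
    exact loss). Each term regresses the cosine similarity between the
    L2-normalized feature and its class prototype to 1, at coarse and
    fine granularity respectively; `lam` balances the two terms."""
    def dot_regression(feat, protos):
        f = F.normalize(feat, dim=-1)             # (B, D) image features
        p = F.normalize(protos[labels], dim=-1)   # (B, D) matched class prototypes
        return 0.5 * ((f * p).sum(dim=-1) - 1.0).pow(2).mean()
    return dot_regression(feat_coarse, protos_coarse) \
        + lam * dot_regression(feat_fine, protos_fine)
```

When a feature already coincides with its class prototype at both levels, both terms vanish, so the loss is zero at the intended optimum.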
Cite
Text
Wang et al. "DualCP: Rehearsal-Free Domain-Incremental Learning via Dual-Level Concept Prototype." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I20.35418
Markdown
[Wang et al. "DualCP: Rehearsal-Free Domain-Incremental Learning via Dual-Level Concept Prototype." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/wang2025aaai-dualcp/) doi:10.1609/AAAI.V39I20.35418
BibTeX
@inproceedings{wang2025aaai-dualcp,
title = {{DualCP: Rehearsal-Free Domain-Incremental Learning via Dual-Level Concept Prototype}},
author = {Wang, Qiang and He, Yuhang and Dong, Songlin and Song, Xiang and Han, Jizhou and Luo, Haoyu and Gong, Yihong},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {21198--21206},
doi = {10.1609/AAAI.V39I20.35418},
url = {https://mlanthology.org/aaai/2025/wang2025aaai-dualcp/}
}