Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning

Abstract

Federated continual learning (FCL) learns incremental tasks over time from confidential datasets distributed across clients. This paper focuses on rehearsal-free FCL, which suffers from severe forgetting when learning new tasks because historical task data are inaccessible. To address this issue, we propose Fed-CPrompt, which builds on prompt learning techniques to obtain task-specific prompts in a communication-efficient way. Fed-CPrompt introduces two key components, asynchronous prompt learning and contrastive continual loss, to handle asynchronous task arrival and heterogeneous data distributions in FCL, respectively. Extensive experiments demonstrate the effectiveness of Fed-CPrompt in achieving SOTA rehearsal-free FCL performance.
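
To make the contrastive continual loss idea concrete, below is a minimal, hypothetical sketch (not the paper's actual implementation) of one common way such a loss can be realized: the prompt being learned for the current task is pushed away from the frozen prompts of earlier tasks, so task-specific prompts stay separable without rehearsing old data. All names (contrastive_continual_loss, temperature, the prompt dimension) are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def contrastive_continual_loss(new_prompt, old_prompts, temperature=0.5):
        # Hypothetical sketch: penalize cosine similarity between the
        # current task's trainable prompt and the frozen prompts of
        # previously learned tasks (inter-task separation).
        new = F.normalize(new_prompt, dim=-1)       # (d,)
        old = F.normalize(old_prompts, dim=-1)      # (T, d), frozen
        sims = old @ new / temperature              # similarity to each old prompt
        # logsumexp is small only when the new prompt is far from all old ones
        return torch.logsumexp(sims, dim=0)

    # Usage during a client's local update (illustrative values):
    new_prompt = torch.randn(64, requires_grad=True)
    old_prompts = torch.randn(3, 64)                # prompts from 3 earlier tasks
    loss = contrastive_continual_loss(new_prompt, old_prompts)
    loss.backward()                                 # combine with the task loss in practice

In an FCL setting this term would be added to each client's local objective, so that heterogeneous clients still converge on prompts that remain distinct across tasks.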

Cite

Text

Bagwe et al. "Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning." ICML 2023 Workshops: FL, 2023.

Markdown

[Bagwe et al. "Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning." ICML 2023 Workshops: FL, 2023.](https://mlanthology.org/icmlw/2023/bagwe2023icmlw-fedcprompt/)

BibTeX

@inproceedings{bagwe2023icmlw-fedcprompt,
  title     = {{Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning}},
  author    = {Bagwe, Gaurav and Yuan, Xiaoyong and Pan, Miao and Zhang, Lan},
  booktitle = {ICML 2023 Workshops: FL},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/bagwe2023icmlw-fedcprompt/}
}