One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning

ICML 2024, pp. 24658–24673

Abstract

In real-world continual learning (CL) scenarios, tasks often exhibit intricate and unpredictable semantic shifts, posing challenges for fixed prompt management strategies, which are tailored to handle only semantic shifts of a uniform degree (i.e., uniformly mild or uniformly abrupt). To address this limitation, we propose AdaPromptCL, an adaptive prompting approach that effectively accommodates semantic shifts of varying degrees, where mild and abrupt shifts are mixed. AdaPromptCL employs an assign-and-refine semantic grouping mechanism that dynamically manages prompt groups according to the semantic similarity between tasks, improving grouping quality through continuous refinement. Our experimental results demonstrate that AdaPromptCL outperforms existing prompting methods by up to 21.3%, especially on benchmark datasets with diverse semantic shifts between tasks.
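The abstract's assign-and-refine idea can be illustrated with a minimal sketch: each task brings an embedding, the *assign* step either joins the most similar existing prompt group or opens a new one, and the *refine* step reassigns all seen tasks to their nearest current group centroid so early assignments can be corrected. Note that all names, thresholds, and the cosine-similarity criterion below are illustrative assumptions based only on the abstract, not the paper's actual algorithm.

```python
# Hypothetical sketch of assign-and-refine semantic grouping, inferred from
# the abstract; details (threshold, cosine similarity) are assumptions.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class PromptGrouper:
    def __init__(self, threshold=0.8):
        self.threshold = threshold  # similarity required to join a group
        self.groups = []            # list of lists of task embeddings
        self.assignments = []       # group index chosen for each task

    def _centroid(self, g):
        return np.mean(self.groups[g], axis=0)

    def assign(self, task_emb):
        # Assign step: join the most similar existing group (mild shift),
        # or start a new group (abrupt shift).
        if self.groups:
            sims = [cosine(task_emb, self._centroid(g))
                    for g in range(len(self.groups))]
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                self.groups[best].append(task_emb)
                self.assignments.append(best)
                return best
        self.groups.append([task_emb])
        self.assignments.append(len(self.groups) - 1)
        return len(self.groups) - 1

    def refine(self, task_embs):
        # Refine step: reassign every task to its nearest current centroid,
        # so early grouping decisions can be revised as tasks accumulate.
        centroids = [self._centroid(g) for g in range(len(self.groups))]
        self.groups = [[] for _ in centroids]
        self.assignments = []
        for emb in task_embs:
            best = int(np.argmax([cosine(emb, c) for c in centroids]))
            self.groups[best].append(emb)
            self.assignments.append(best)
        # Drop any groups left empty after reassignment.
        keep = [i for i, g in enumerate(self.groups) if g]
        remap = {old: new for new, old in enumerate(keep)}
        self.groups = [self.groups[i] for i in keep]
        self.assignments = [remap[a] for a in self.assignments]
```

For example, two near-parallel task embeddings would be assigned to one shared prompt group, while an orthogonal one would trigger a new group, mirroring the mild-versus-abrupt distinction the abstract describes.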

Cite

Text

Kim et al. "One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning." International Conference on Machine Learning, 2024.

Markdown

[Kim et al. "One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/kim2024icml-one/)

BibTeX

@inproceedings{kim2024icml-one,
  title     = {{One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning}},
  author    = {Kim, Doyoung and Yoon, Susik and Park, Dongmin and Lee, Youngjun and Song, Hwanjun and Bang, Jihwan and Lee, Jae-Gil},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {24658--24673},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/kim2024icml-one/}
}