Attack on Prompt: Backdoor Attack in Prompt-Based Continual Learning
Abstract
Prompt-based approaches offer a cutting-edge solution to data privacy issues in continual learning, particularly in scenarios involving multiple data suppliers where long-term storage of private user data is prohibited. Despite delivering state-of-the-art performance, their impressive remembering capability can become a double-edged sword, raising security concerns as they might inadvertently retain poisoned knowledge injected while learning from private user data. Following this insight, in this paper we expose continual learning to a potential threat: backdoor attacks, which drive the model to follow a desired adversarial target whenever a specific trigger is present while still performing normally on clean samples. We highlight three critical challenges in executing backdoor attacks on incremental learners and propose corresponding solutions: (1) Transferability: we employ a surrogate dataset and manipulate prompt selection to transfer backdoor knowledge to data from other suppliers; (2) Resiliency: we simulate static and dynamic states of the victim to ensure the backdoor trigger remains robust during intense incremental learning processes; and (3) Authenticity: we apply a binary cross-entropy loss as an anti-cheating factor to prevent the backdoor trigger from devolving into adversarial noise. Extensive experiments across various benchmark datasets and continual learners validate our continual backdoor framework, and further ablation studies confirm the effectiveness of each contribution.
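To make the abstract's objective concrete, below is a minimal PyTorch sketch of what a trigger-optimization loss of this flavor might look like: a cross-entropy term that pushes triggered inputs toward the attacker's target class, plus a binary cross-entropy "anti-cheating" term that keeps predictions on clean inputs faithful to their true labels. The function names (apply_trigger, backdoor_loss), the patch-blending scheme, and the exact loss composition are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def apply_trigger(images, trigger, mask, alpha=1.0):
    # Blend a (learnable) trigger patch into clean images; `mask` selects
    # the patch region and `alpha` controls its opacity.
    return images * (1 - alpha * mask) + trigger * (alpha * mask)

def backdoor_loss(logits_poisoned, target_class, logits_clean, labels_clean):
    # Cross-entropy drives triggered inputs toward the adversarial target.
    target = torch.full(
        (logits_poisoned.size(0),), target_class,
        dtype=torch.long, device=logits_poisoned.device,
    )
    attack_term = F.cross_entropy(logits_poisoned, target)

    # BCE "anti-cheating" term: keep clean predictions close to the true
    # labels so the trigger does not degenerate into generic adversarial
    # noise that flips every input regardless of the trigger.
    clean_onehot = F.one_hot(labels_clean, logits_clean.size(1)).float()
    anti_cheating = F.binary_cross_entropy_with_logits(logits_clean, clean_onehot)

    return attack_term + anti_cheating

# Hypothetical usage: optimize a 16x16 corner patch on surrogate data.
images = torch.rand(8, 3, 224, 224)
trigger = torch.rand(3, 224, 224, requires_grad=True)
mask = torch.zeros(3, 224, 224)
mask[:, -16:, -16:] = 1.0
poisoned = apply_trigger(images, trigger, mask)
```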
Cite
Text
Nguyen et al. "Attack on Prompt: Backdoor Attack in Prompt-Based Continual Learning." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I18.34168
Markdown
[Nguyen et al. "Attack on Prompt: Backdoor Attack in Prompt-Based Continual Learning." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/nguyen2025aaai-attack/) doi:10.1609/AAAI.V39I18.34168
BibTeX
@inproceedings{nguyen2025aaai-attack,
title = {{Attack on Prompt: Backdoor Attack in Prompt-Based Continual Learning}},
author = {Nguyen, Trang and Tran, Anh and Ho, Nhat},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {19686--19694},
doi = {10.1609/AAAI.V39I18.34168},
url = {https://mlanthology.org/aaai/2025/nguyen2025aaai-attack/}
}