Exploiting Presentative Feature Distributions for Parameter-Efficient Continual Learning of Large Language Models
Abstract
Endowing large language models (LLMs) with continual learning (CL) capabilities is of practical importance, as it enables them to dynamically acquire new knowledge over time. Although many effective methods have been proposed for CL of LLMs, they do not consider online scenarios and thus share a common problem: information leakage (IL), where task-related information of previously learned tasks is accessed or reused. IL not only poses potential risks to data privacy but also significantly hinders the deployment of LLMs in real-world scenarios. To avoid IL while maintaining strong CL performance, we propose a novel CL method for LLMs that first characterizes each parameter-efficient fine-tuning (PEFT) block by a presentative feature distribution, and then dynamically selects the appropriate PEFT blocks for each instance based on its similarity with these presentative feature distributions. Extensive experiments validate the effectiveness of our method for CL of LLMs, showcasing its potential to enhance both privacy and adaptability in practical applications.
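To make the selection idea in the abstract concrete, the following is a minimal, hypothetical sketch, not the authors' released code: it assumes each task's PEFT block is summarized by a Gaussian over hidden features and that an instance is routed to the block with the smallest Mahalanobis distance to its feature. The names `TaskProfile`, `fit_profile`, and `select_peft_block`, and the choice of Mahalanobis distance as the similarity measure, are illustrative assumptions.

```python
# Hypothetical sketch of distribution-based PEFT block selection (not the paper's code).
import numpy as np
from dataclasses import dataclass


@dataclass
class TaskProfile:
    mean: np.ndarray      # per-task feature mean, shape (d,)
    inv_cov: np.ndarray   # inverse of regularized per-task feature covariance, shape (d, d)
    block_id: int         # index of the PEFT (e.g., LoRA) block trained on this task


def fit_profile(features: np.ndarray, block_id: int, eps: float = 1e-3) -> TaskProfile:
    """Estimate a task's presentative feature distribution from its training features."""
    mean = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + eps * np.eye(features.shape[1])  # regularize
    return TaskProfile(mean=mean, inv_cov=np.linalg.inv(cov), block_id=block_id)


def select_peft_block(feature: np.ndarray, profiles: list[TaskProfile]) -> int:
    """Return the PEFT block whose distribution is closest to the instance feature
    (smallest Mahalanobis distance, used here as one plausible similarity measure)."""
    def mahalanobis(p: TaskProfile) -> float:
        diff = feature - p.mean
        return float(diff @ p.inv_cov @ diff)
    return min(profiles, key=mahalanobis).block_id


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 16
    # Two toy "tasks" with different feature statistics.
    task_a = rng.normal(loc=0.0, scale=1.0, size=(200, d))
    task_b = rng.normal(loc=3.0, scale=1.0, size=(200, d))
    profiles = [fit_profile(task_a, block_id=0), fit_profile(task_b, block_id=1)]
    query = rng.normal(loc=3.0, scale=1.0, size=d)  # resembles the second task
    print("selected PEFT block:", select_peft_block(query, profiles))  # -> 1
```

Because routing uses only summary statistics of each learned task rather than stored task data, no raw examples from previous tasks need to be revisited at inference time, which is how a scheme of this kind can avoid information leakage.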
Cite
Text
Cheng et al. "Exploiting Presentative Feature Distributions for Parameter-Efficient Continual Learning of Large Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown
[Cheng et al. "Exploiting Presentative Feature Distributions for Parameter-Efficient Continual Learning of Large Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/cheng2025icml-exploiting/)

BibTeX
@inproceedings{cheng2025icml-exploiting,
title = {{Exploiting Presentative Feature Distributions for Parameter-Efficient Continual Learning of Large Language Models}},
author = {Cheng, Xin and Ye, Jiabo and Xu, Haiyang and Yan, Ming and Zhang, Ji and Liu, Feng and Huang, Fei and Feng, Lei},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {10159--10181},
volume = {267},
url = {https://mlanthology.org/icml/2025/cheng2025icml-exploiting/}
}