Effective Continual Learning for Text Classification with Lightweight Snapshots
Abstract
Continual learning is known to suffer from catastrophic forgetting, a phenomenon in which previously learned concepts are forgotten as new tasks are learned. A natural remedy is to use models trained on old tasks as ‘teachers’ that regularize updates to the current model and thereby prevent such forgetting. However, this requires storing all past models, which is prohibitively space-consuming for large models such as BERT and thus impractical in real-world applications. To tackle this issue, we propose constructing snapshots of seen tasks whose key knowledge is captured in lightweight adapters. During continual learning, we transfer knowledge from past snapshots to the current model through knowledge distillation, allowing the current model to review previously learned knowledge while learning new tasks. We also design a representation recalibration mechanism to better handle the class-incremental setting. Experiments over various task sequences show that our approach effectively mitigates catastrophic forgetting and outperforms all baselines.
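The abstract describes distilling knowledge from adapter-based snapshots of earlier tasks into the current model while it trains on a new task. Below is a minimal PyTorch sketch of that general idea, not the authors' implementation: the `Adapter` bottleneck, the stand-in encoder, the `distillation_loss` helper, the temperature, and the loss weighting are all illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's code): a frozen lightweight
# adapter + head act as a "snapshot" teacher for an old task, and a KD term
# regularizes the current model while it learns a new task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Small residual bottleneck adapter placed on top of a shared encoder."""
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h):
        return h + self.up(F.relu(self.down(h)))  # residual bottleneck transform

def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# --- toy usage (all sizes are placeholders) --------------------------------
hidden, num_old, num_new = 768, 4, 3
encoder = nn.Linear(300, hidden)                  # stand-in for a BERT-like encoder
classifier = nn.Linear(hidden, num_old + num_new) # current model's classification head

# Frozen snapshot of an earlier task: lightweight adapter + its task head.
snapshot_adapter = Adapter(hidden)
snapshot_head = nn.Linear(hidden, num_old)
for p in [*snapshot_adapter.parameters(), *snapshot_head.parameters()]:
    p.requires_grad_(False)

x = torch.randn(8, 300)                           # toy input features
y = num_old + torch.randint(0, num_new, (8,))     # labels for the new task

h = encoder(x)
teacher_logits = snapshot_head(snapshot_adapter(h.detach()))  # teacher signal only
student_logits = classifier(h)

# New-task cross-entropy plus distillation on the old-task logit slice.
loss = F.cross_entropy(student_logits, y) + distillation_loss(
    student_logits[:, :num_old], teacher_logits)
loss.backward()
```

Because only the adapter and head are stored per snapshot, the memory cost per past task stays far below saving a full BERT-sized model.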
Cite
Text
Wang et al. "Effective Continual Learning for Text Classification with Lightweight Snapshots." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I8.26206
Markdown
[Wang et al. "Effective Continual Learning for Text Classification with Lightweight Snapshots." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/wang2023aaai-effective/) doi:10.1609/AAAI.V37I8.26206
BibTeX
@inproceedings{wang2023aaai-effective,
title = {{Effective Continual Learning for Text Classification with Lightweight Snapshots}},
author = {Wang, Jue and Dong, Dajie and Shou, Lidan and Chen, Ke and Chen, Gang},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {10122--10130},
doi = {10.1609/AAAI.V37I8.26206},
url = {https://mlanthology.org/aaai/2023/wang2023aaai-effective/}
}