Continual Learning with Dual Regularizations
Abstract
Continual learning (CL) has received a great amount of attention in recent years, and a multitude of approaches have arisen. In this paper, we propose a continual learning approach with dual regularizations to alleviate the well-known issue of catastrophic forgetting in a challenging continual learning scenario – domain incremental learning. We reserve a buffer of past examples, dubbed the memory set, to retain information about previous tasks. The key idea is to regularize both the learned representation space and the model outputs by interleaving the memory examples into the current training process. We verify our approach on four CL dataset benchmarks. Our experimental results demonstrate that the proposed approach is consistently superior to the compared methods on all benchmarks, especially when the buffer size is small.
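The abstract describes a loss that combines the current task objective with two memory-based regularizers: one on the representation space and one on the model outputs. The paper's exact loss forms are not given here, so the following is only a minimal NumPy sketch under assumed choices (a tanh feature layer, mean-squared-error regularizers against features and logits stored when the memory examples were first seen, and hypothetical weights `alpha`/`beta`):

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def forward(x, W1, W2):
    # two-layer model: representation, then class logits
    feats = np.tanh(x @ W1)
    return feats, feats @ W2

def dual_reg_loss(W1, W2, x_cur, y_cur, x_mem, y_mem,
                  feats_mem_old, logits_mem_old, alpha=1.0, beta=1.0):
    # task loss on the current batch
    _, logits_cur = forward(x_cur, W1, W2)
    task = cross_entropy(logits_cur, y_cur)
    # interleaved memory examples: replay loss plus dual regularizers
    feats_mem, logits_mem = forward(x_mem, W1, W2)
    replay = cross_entropy(logits_mem, y_mem)
    rep_reg = np.mean((feats_mem - feats_mem_old) ** 2)   # representation-space term
    out_reg = np.mean((logits_mem - logits_mem_old) ** 2) # output-space term
    return task + replay + alpha * rep_reg + beta * out_reg

# toy usage: store old features/logits for the memory set, then score new weights
rng = np.random.default_rng(0)
d, h, c = 4, 3, 2
W1, W2 = rng.normal(size=(d, h)), rng.normal(size=(h, c))
x_cur, y_cur = rng.normal(size=(5, d)), rng.integers(0, c, 5)
x_mem, y_mem = rng.normal(size=(3, d)), rng.integers(0, c, 3)
feats_old, logits_old = forward(x_mem, W1, W2)
loss = dual_reg_loss(W1, W2, x_cur, y_cur, x_mem, y_mem, feats_old, logits_old)
```

When the weights have not changed since the memory snapshots were taken (as in the toy usage above), both regularizers are exactly zero and the loss reduces to the two cross-entropy terms.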
Cite
Text
Han and Guo. "Continual Learning with Dual Regularizations." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021. doi:10.1007/978-3-030-86486-6_38
BibTeX
@inproceedings{han2021ecmlpkdd-continual,
title = {{Continual Learning with Dual Regularizations}},
author = {Han, Xuejun and Guo, Yuhong},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2021},
pages = {619--634},
doi = {10.1007/978-3-030-86486-6_38},
url = {https://mlanthology.org/ecmlpkdd/2021/han2021ecmlpkdd-continual/}
}