Bi-Objective Continual Learning: Learning 'New' While Consolidating 'Known'
Abstract
In this paper, we propose a novel single-task continual learning framework named Bi-Objective Continual Learning (BOCL). BOCL aims at both consolidating historical knowledge and learning from new data. On the one hand, we preserve old knowledge with a small set of pillars and develop the pillar consolidation (PLC) loss to alleviate the catastrophic forgetting problem. On the other hand, we develop the contrastive pillar (CPL) loss term to improve classification performance, and examine several data sampling strategies for efficient onsite learning from the 'new' with a reasonable amount of computational resources. Comprehensive experiments on CIFAR10/100, CORe50, and a subset of ImageNet validate the BOCL framework. We also compare the accuracy of different sampling strategies when they are used to fine-tune a given CNN model. The code will be released.
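The abstract names the two objectives but not their exact loss forms. As a minimal PyTorch sketch, the following shows how a cross-entropy term on new data could be combined with a pillar-based consolidation (PLC) term and a contrastive pillar (CPL) term; the squared-distance and InfoNCE-style formulations, the weights `lam_plc` and `lam_cpl`, and the function name `bocl_loss` are assumptions for illustration, not the paper's definitions.

```python
# Minimal sketch of a bi-objective loss in the spirit of BOCL.
# ASSUMPTIONS: pillars are stored (feature, label) pairs; the squared-
# distance PLC term, the InfoNCE-style CPL term, and the weights
# `lam_plc` / `lam_cpl` are illustrative, not the paper's formulas.
import torch
import torch.nn.functional as F

def bocl_loss(logits, labels, feats, pillar_feats, pillar_labels,
              lam_plc=1.0, lam_cpl=0.1, temperature=0.1):
    """logits: (B, C) and feats: (B, D) for the current batch;
    pillar_feats: (P, D) and pillar_labels: (P,) for stored pillars."""
    # Objective 1: learn the 'new' data with plain cross-entropy.
    ce = F.cross_entropy(logits, labels)

    # Mask of same-class (sample, pillar) pairs, shape (B, P).
    pos = (labels.unsqueeze(1) == pillar_labels.unsqueeze(0)).float()
    n_pos = pos.sum().clamp(min=1.0)

    # Objective 2a, PLC (sketch): keep current features close to the
    # same-class pillars, consolidating the 'known'.
    plc = (torch.cdist(feats, pillar_feats).pow(2) * pos).sum() / n_pos

    # Objective 2b, CPL (sketch): pull samples toward same-class pillars
    # and push them away from other-class pillars.
    sim = F.cosine_similarity(feats.unsqueeze(1),
                              pillar_feats.unsqueeze(0), dim=-1) / temperature
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    cpl = -(log_prob * pos).sum() / n_pos

    return ce + lam_plc * plc + lam_cpl * cpl
```

In a faithful implementation the pillars would be selected and updated as the paper describes; this sketch only illustrates how the two objectives can combine into a single training loss.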
Cite
Text
Tao et al. "Bi-Objective Continual Learning: Learning 'New' While Consolidating 'Known'." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.6060
Markdown
[Tao et al. "Bi-Objective Continual Learning: Learning 'New' While Consolidating 'Known'." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/tao2020aaai-bi/) doi:10.1609/AAAI.V34I04.6060
BibTeX
@inproceedings{tao2020aaai-bi,
title = {{Bi-Objective Continual Learning: Learning 'New' While Consolidating 'Known'}},
author = {Tao, Xiaoyu and Hong, Xiaopeng and Chang, Xinyuan and Gong, Yihong},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {5989--5996},
doi = {10.1609/AAAI.V34I04.6060},
url = {https://mlanthology.org/aaai/2020/tao2020aaai-bi/}
}