Flashback for Continual Learning
Abstract
To strike a balance between model stability and plasticity in continual learning, previous approaches guide model updates on new data so as to preserve old knowledge, while absorbing new information only implicitly through the task objective (e.g., the classification loss). We instead aim to achieve this balance explicitly, proposing a bi-directional regularization that guides the model both in preserving existing knowledge and in actively absorbing new knowledge. To this end, we propose the Flashback Learning (FL) algorithm, a two-stage training approach that integrates seamlessly with diverse methods across continual learning categories. FL creates two knowledge bases: one with high plasticity to drive learning and one conservative to prevent forgetting; it then guides the model update using both. FL significantly improves baseline methods on common image classification datasets such as CIFAR-10, CIFAR-100, and Tiny ImageNet across various settings.
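To make the two-stage structure concrete, below is a minimal sketch of how such a bi-directional regularization could look in PyTorch, assuming distillation-style penalties toward each knowledge base. All names (`flashback_task`, `lambda_stable`, `lambda_plastic`) and the exact loss forms are assumptions for illustration; the abstract does not specify the paper's actual regularizers or how the knowledge bases are built.

```python
# Hypothetical sketch of a two-stage update with two knowledge bases;
# not the paper's actual implementation.
import copy

import torch
import torch.nn.functional as F


def flashback_task(model, loader, make_optimizer,
                   lambda_stable=1.0, lambda_plastic=1.0, epochs=1):
    # Stable knowledge base: a frozen snapshot of the model taken before
    # training on the new task, used to discourage forgetting.
    stable_kb = copy.deepcopy(model).eval()

    # Stage 1 (assumed construction): build the high-plasticity knowledge
    # base by fine-tuning a copy of the model on the new task with the
    # task loss alone, i.e., with no constraint on forgetting.
    plastic_kb = copy.deepcopy(model)
    opt = make_optimizer(plastic_kb.parameters())
    plastic_kb.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(plastic_kb(x), y).backward()
            opt.step()
    plastic_kb.eval()

    # Stage 2: update the main model under bi-directional regularization,
    # pulled toward the conservative teacher (stability) and toward the
    # plastic teacher (plasticity) at the same time.
    opt = make_optimizer(model.parameters())
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            logits = model(x)
            with torch.no_grad():
                stable_logits = stable_kb(x)
                plastic_logits = plastic_kb(x)
            loss = (F.cross_entropy(logits, y)
                    + lambda_stable * F.mse_loss(logits, stable_logits)
                    + lambda_plastic * F.mse_loss(logits, plastic_logits))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Because both teachers enter only through added penalty terms on the task loss, a sketch of this shape would compose with existing continual learning methods by replacing `F.cross_entropy` with the base method's own objective.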
Cite
Text
Mahmoodi et al. "Flashback for Continual Learning." IEEE/CVF International Conference on Computer Vision Workshops, 2023. doi:10.1109/ICCVW60793.2023.00368
Markdown
[Mahmoodi et al. "Flashback for Continual Learning." IEEE/CVF International Conference on Computer Vision Workshops, 2023.](https://mlanthology.org/iccvw/2023/mahmoodi2023iccvw-flashback/) doi:10.1109/ICCVW60793.2023.00368
BibTeX
@inproceedings{mahmoodi2023iccvw-flashback,
title = {{Flashback for Continual Learning}},
author = {Mahmoodi, Leila and Harandi, Mehrtash and Moghadam, Peyman},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2023},
pages = {3426--3435},
doi = {10.1109/ICCVW60793.2023.00368},
url = {https://mlanthology.org/iccvw/2023/mahmoodi2023iccvw-flashback/}
}