Studying Generalization on Memory-Based Methods in Continual Learning
Abstract
One of the objectives of Continual Learning is to learn new concepts continually over a stream of experiences while avoiding catastrophic forgetting. To mitigate complete knowledge overwriting, memory-based methods store a percentage of previous data distributions to be used during training. Although these methods produce good results, few studies have tested their out-of-distribution generalization properties or whether they overfit the replay memory. In this work, we show that although these methods can help with traditional in-distribution generalization, they can strongly impair out-of-distribution generalization by learning spurious features and correlations. In a controlled environment, using the Synbols benchmark generator (Lacoste et al., 2020), we demonstrate that this lack of out-of-distribution generalization mainly occurs in the linear classifier.
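For readers unfamiliar with the replay mechanism the abstract refers to, below is a minimal sketch of a memory-based (rehearsal) buffer: a fixed-capacity store of past examples, filled by reservoir sampling and mixed into each training batch. The class name, capacity, and sampling scheme are illustrative assumptions and do not correspond to the specific methods evaluated in the paper.

```python
import random


class ReplayMemory:
    """Fixed-size buffer keeping a subset of past examples via reservoir sampling (illustrative sketch)."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []       # stored (input, label) pairs from earlier experiences
        self.num_seen = 0      # total examples observed so far
        self.rng = random.Random(seed)

    def add(self, example):
        """Insert one example; every observed example has a capacity/num_seen chance of being kept."""
        self.num_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.num_seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, batch_size):
        """Draw a replay mini-batch to mix with the current experience's batch."""
        k = min(batch_size, len(self.buffer))
        return self.rng.sample(self.buffer, k)


# Usage: interleave replayed examples with data from the current experience.
memory = ReplayMemory(capacity=200)
for experience_id in range(3):                          # stream of experiences
    current_data = [((experience_id, i), experience_id) for i in range(500)]
    for x, y in current_data:
        replay_batch = memory.sample(batch_size=8)      # old examples to rehearse
        # train_step([(x, y)] + replay_batch)           # placeholder for the actual model update
        memory.add((x, y))
print(f"buffer holds {len(memory.buffer)} of {memory.num_seen} seen examples")
```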
Cite
Text
del Rio et al. "Studying Generalization on Memory-Based Methods in Continual Learning." ICML 2023 Workshops: LXAI_Regular_Deadline, 2023.
Markdown
[del Rio et al. "Studying Generalization on Memory-Based Methods in Continual Learning." ICML 2023 Workshops: LXAI_Regular_Deadline, 2023.](https://mlanthology.org/icmlw/2023/delrio2023icmlw-studying/)
BibTeX
@inproceedings{delrio2023icmlw-studying,
title = {{Studying Generalization on Memory-Based Methods in Continual Learning}},
author = {del Rio, Felipe and Hurtado, Julio and Calderon, Cristian Buc and Soto, Alvaro and Lomonaco, Vincenzo},
booktitle = {ICML 2023 Workshops: LXAI_Regular_Deadline},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/delrio2023icmlw-studying/}
}