Adversarial Continual Learning

Abstract

Continual learning aims to learn new tasks without forgetting previously learned ones. We hypothesize that representations learned to solve each task in a sequence have a shared structure while containing some task-specific properties. We show that shared features are significantly less prone to forgetting, and we propose a novel hybrid continual learning framework that learns disjoint representations for the task-invariant and task-specific features required to solve a sequence of tasks. Our model combines architecture growth, to prevent forgetting of task-specific skills, with an experience replay approach, to preserve shared skills. We demonstrate that our hybrid approach is effective in avoiding forgetting and show that it is superior to both architecture-based and memory-based approaches on class-incremental learning of a single dataset as well as on a sequence of multiple datasets in image classification. Our code is available at https://github.com/facebookresearch/Adversarial-Continual-Learning
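The title's "adversarial" ingredient is only implicit in the abstract: the shared, task-invariant features are obtained by training the shared module against a discriminator that tries to predict which task an example came from. Below is a minimal, hypothetical PyTorch sketch of that factorization; the names (`ACLSketch`, `GradReverse`), the MLP encoders, and all dimensions are illustrative assumptions rather than the authors' implementation, and the experience-replay component mentioned in the abstract is omitted.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward.

    Hypothetical helper: lets one optimizer step train the discriminator to
    identify the task while pushing the shared encoder to hide it.
    """

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class ACLSketch(nn.Module):
    """Sketch of the disjoint representation: one shared (task-invariant)
    encoder, one private (task-specific) encoder per task, a classification
    head over both, and a task discriminator on the shared features only."""

    def __init__(self, in_dim, feat_dim, n_tasks, n_classes):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # one private module per task; a new one is appended as tasks arrive
        # (the "architecture growth" half of the hybrid)
        self.private = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
             for _ in range(n_tasks)])
        self.head = nn.Linear(2 * feat_dim, n_classes)
        self.discriminator = nn.Linear(feat_dim, n_tasks)

    def forward(self, x, task_id, lambd=1.0):
        s = self.shared(x)                       # task-invariant features
        p = self.private[task_id](x)             # task-specific features
        class_logits = self.head(torch.cat([s, p], dim=1))
        # reversed gradient trains the shared encoder to fool the task
        # discriminator, encouraging task-invariant shared features
        task_logits = self.discriminator(GradReverse.apply(s, lambd))
        return class_logits, task_logits


# toy usage: a batch of 8 flattened inputs routed through task 0
model = ACLSketch(in_dim=784, feat_dim=64, n_tasks=5, n_classes=10)
x = torch.randn(8, 784)
class_logits, task_logits = model(x, task_id=0)
```

In this sketch, the classification loss flows through both encoders, while the reversed gradient from the task discriminator discourages task identity from leaking into the shared features; per the abstract, those shared features are the ones least prone to forgetting and are further protected by experience replay in the full method.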

Cite

Text

Ebrahimi et al. "Adversarial Continual Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58621-8_23

Markdown

[Ebrahimi et al. "Adversarial Continual Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/ebrahimi2020eccv-adversarial/) doi:10.1007/978-3-030-58621-8_23

BibTeX

@inproceedings{ebrahimi2020eccv-adversarial,
  title     = {{Adversarial Continual Learning}},
  author    = {Ebrahimi, Sayna and Meier, Franziska and Calandra, Roberto and Darrell, Trevor and Rohrbach, Marcus},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58621-8_23},
  url       = {https://mlanthology.org/eccv/2020/ebrahimi2020eccv-adversarial/}
}