Experience Replay for Continual Learning

Abstract

Interacting with a complex world involves continual learning, in which tasks and data distributions change over time. A continual learning system should demonstrate both plasticity (acquisition of new knowledge) and stability (preservation of old knowledge). Catastrophic forgetting is the failure of stability, in which new experience overwrites previous experience. In the brain, replay of past experience is widely believed to reduce forgetting, yet it has been largely overlooked as a solution to forgetting in deep reinforcement learning. Here, we introduce CLEAR (Continual Learning with Experience And Replay), a replay-based method that greatly reduces catastrophic forgetting in multi-task reinforcement learning. CLEAR leverages off-policy learning and behavioral cloning from replay to enhance stability, as well as on-policy learning to preserve plasticity. We show that CLEAR performs better than state-of-the-art deep learning techniques for mitigating forgetting, despite being significantly less complicated and not requiring any knowledge of the individual tasks being learned.
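
As a reading aid, below is a minimal Python (PyTorch) sketch of the two behavioral-cloning penalties that the abstract alludes to, not the authors' code. It assumes the replay buffer stores, alongside each state, the behavior policy's action probabilities and its value estimate at the time of acting; all names here (clear_replay_losses, stored_probs, stored_values) are hypothetical.

import torch.nn.functional as F

def clear_replay_losses(policy_logits, values, stored_probs, stored_values):
    """Behavioral-cloning penalties on a batch of replayed states (sketch).

    policy_logits : (B, A) current policy logits on replayed states
    values        : (B,)   current value estimates on replayed states
    stored_probs  : (B, A) behavior-policy probabilities saved in the buffer
    stored_values : (B,)   value estimates saved in the buffer
    """
    log_pi = F.log_softmax(policy_logits, dim=-1)
    # KL(mu || pi): keep the current policy close to the replayed behavior policy.
    log_mu = stored_probs.clamp_min(1e-8).log()
    policy_cloning = (stored_probs * (log_mu - log_pi)).sum(dim=-1).mean()
    # L2 penalty: keep the current value function close to its recorded output.
    value_cloning = F.mse_loss(values, stored_values)
    return policy_cloning, value_cloning

In training, these penalties would be added with tunable coefficients to a standard actor-critic loss computed on batches that mix fresh on-policy trajectories with replayed ones (e.g., a 50-50 ratio), with off-policy corrections such as V-trace handling the replayed data.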

Cite

Text

Rolnick et al. "Experience Replay for Continual Learning." Neural Information Processing Systems, 2019.

Markdown

[Rolnick et al. "Experience Replay for Continual Learning." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/rolnick2019neurips-experience/)

BibTeX

@inproceedings{rolnick2019neurips-experience,
  title     = {{Experience Replay for Continual Learning}},
  author    = {Rolnick, David and Ahuja, Arun and Schwarz, Jonathan and Lillicrap, Timothy and Wayne, Gregory},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {350--360},
  url       = {https://mlanthology.org/neurips/2019/rolnick2019neurips-experience/}
}