Neuro-Inspired Fragmentation and Recall to Overcome Catastrophic Forgetting in Curiosity

Abstract

Intrinsic reward functions are widely used to improve exploration in reinforcement learning. We first examine the conditions under which, and the reasons why, a prediction-based intrinsic reward function catastrophically forgets, and propose a new method, FARCuriosity, inspired by how humans and non-human animals learn. The method depends on fragmentation and recall: an agent fragments an environment based on surprisal signals and uses a different local curiosity module (a prediction-based intrinsic reward function) for each fragment, so that no single module is trained on the entire environment. At each fragmentation event, the agent stores the current module in long-term memory (LTM) and either initializes a new module or recalls a previously stored module based on its match with the current state. With fragmentation and recall, FARCuriosity achieves less forgetting and better overall performance on Atari benchmark games with varied and heterogeneous environments. This work thus highlights the problem of catastrophic forgetting in prediction-based curiosity methods and proposes a first solution.
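The fragmentation-and-recall control flow described in the abstract can be sketched in a few lines. The sketch below is a toy illustration, not the authors' implementation: the paper's curiosity modules are learned forward-dynamics predictors, whereas here each module is a running-mean state predictor whose squared prediction error plays the role of surprisal. All names (`CuriosityModule`, `FARCuriosity`, the threshold value) are hypothetical.

```python
class CuriosityModule:
    """Toy prediction-based curiosity module: predicts the next state
    as the running mean of states it has seen; surprisal is the
    squared prediction error against that mean."""

    def __init__(self):
        self.mean = None   # running-mean "prediction" of the state
        self.count = 0

    def error(self, state):
        # An untrained module reports zero surprisal, so a freshly
        # initialized module is adopted without triggering fragmentation.
        if self.mean is None:
            return 0.0
        return sum((s - m) ** 2 for s, m in zip(state, self.mean))

    def update(self, state):
        if self.mean is None:
            self.mean = list(state)
            self.count = 1
        else:
            self.count += 1
            self.mean = [m + (s - m) / self.count
                         for m, s in zip(self.mean, state)]


class FARCuriosity:
    """Fragmentation-and-recall wrapper around local curiosity modules."""

    def __init__(self, threshold):
        self.threshold = threshold   # surprisal level that triggers fragmentation
        self.ltm = []                # long-term memory of stored modules
        self.current = CuriosityModule()

    def step(self, state):
        surprisal = self.current.error(state)
        if surprisal > self.threshold:          # fragmentation event
            if self.current not in self.ltm:    # store current module in LTM
                self.ltm.append(self.current)
            # Recall the stored module that best matches the current state,
            # or initialize a new one if none matches well enough.
            best = min(self.ltm, key=lambda m: m.error(state))
            if best.error(state) <= self.threshold:
                self.current = best
            else:
                self.current = CuriosityModule()
        reward = self.current.error(state)      # intrinsic reward
        self.current.update(state)
        return reward
```

Running the agent on states drawn from one region, then a second, then the first again exercises all three behaviors: the jump to the second region triggers fragmentation and a new module, and the return to the first region triggers fragmentation followed by recall of the originally stored module, so the LTM does not grow on revisits.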

Cite

Text

Hwang et al. "Neuro-Inspired Fragmentation and Recall to Overcome Catastrophic Forgetting in Curiosity." NeurIPS 2023 Workshops: IMOL, 2023.

Markdown

[Hwang et al. "Neuro-Inspired Fragmentation and Recall to Overcome Catastrophic Forgetting in Curiosity." NeurIPS 2023 Workshops: IMOL, 2023.](https://mlanthology.org/neuripsw/2023/hwang2023neuripsw-neuroinspired/)

BibTeX

@inproceedings{hwang2023neuripsw-neuroinspired,
  title     = {{Neuro-Inspired Fragmentation and Recall to Overcome Catastrophic Forgetting in Curiosity}},
  author    = {Hwang, Jaedong and Hong, Zhang-Wei and Chen, Eric and Boopathy, Akhilan and Agrawal, Pulkit and Fiete, Ila},
  booktitle = {NeurIPS 2023 Workshops: IMOL},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/hwang2023neuripsw-neuroinspired/}
}