Self-Composing Policies for Scalable Continual Reinforcement Learning

Abstract

This work introduces a growable and modular neural network architecture that naturally avoids catastrophic forgetting and interference in continual reinforcement learning. The structure of each module allows the selective combination of previous policies along with its own internal policy, accelerating the learning process on the current task. Unlike previous growing neural network approaches, we show that the number of parameters of the proposed approach grows linearly with the number of tasks, without sacrificing plasticity to scale. Experiments conducted on benchmark continuous control and visual problems reveal that the proposed approach achieves greater knowledge transfer and performance than alternative methods.
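The mechanism the abstract describes — a new module that selectively combines the outputs of frozen earlier policies with its own internal policy — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: the linear internal policy, the softmax mixing weights, and all names (`ComposingModule`, `act`) are assumptions made for the sketch.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

class ComposingModule:
    """Hypothetical sketch of one growable module: it holds references to
    frozen previous-task policies and learns (a) a small internal policy and
    (b) mixing weights that select among previous policies and itself."""

    def __init__(self, prev_policies, obs_dim, act_dim, rng):
        self.prev = prev_policies  # earlier modules, treated as frozen
        # trainable internal policy (a linear map here, for illustration)
        self.W = rng.normal(0.0, 0.1, (obs_dim, act_dim))
        # one mixing logit per previous policy, plus one for the internal policy
        self.logits = np.zeros(len(prev_policies) + 1)

    def act(self, obs):
        outs = [p.act(obs) for p in self.prev]  # previous policies' outputs
        outs.append(obs @ self.W)               # internal policy's output
        w = softmax(self.logits)                # learned composition weights
        return sum(wi * oi for wi, oi in zip(w, outs))
```

In this sketch, each new task adds one module whose fixed-size internal policy dominates the parameter count, so growth stays linear in the number of tasks (matching the abstract's claim); the per-module mixing logits are a negligible overhead.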

Cite

Text

Malagon et al. "Self-Composing Policies for Scalable Continual Reinforcement Learning." International Conference on Machine Learning, 2024.

Markdown

[Malagon et al. "Self-Composing Policies for Scalable Continual Reinforcement Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/malagon2024icml-selfcomposing/)

BibTeX

@inproceedings{malagon2024icml-selfcomposing,
  title     = {{Self-Composing Policies for Scalable Continual Reinforcement Learning}},
  author    = {Malagon, Mikel and Ceberio, Josu and Lozano, Jose A.},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {34432--34460},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/malagon2024icml-selfcomposing/}
}