Building a Subspace of Policies for Scalable Continual Learning

Abstract

The ability to continuously acquire new knowledge and skills is crucial for autonomous agents. Existing methods are typically based on either fixed-size models that struggle to learn a large number of diverse behaviors, or growing-size models that scale poorly with the number of tasks. In this work, we aim to strike a better balance between scalability and performance by designing a method whose size grows adaptively depending on the task sequence. We introduce Continual Subspace of Policies (CSP), a new approach that incrementally builds a subspace of policies for training a reinforcement learning agent on a sequence of tasks. The subspace's high expressivity allows CSP to perform well for many different tasks while growing more slowly than the number of tasks. Our method does not suffer from forgetting and also displays positive transfer to new tasks. CSP outperforms a number of popular baselines on a wide range of scenarios from two challenging domains, Brax (locomotion) and Continual World (robotic manipulation). Interactive visualizations of the subspace can be found at https://share.streamlit.io/continual-subspace/policies/main.
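
To make the core idea above more concrete, the sketch below shows one way a "subspace of policies" could be represented: a set of anchor networks whose weights are mixed by convex coefficients on the simplex, with the subspace tentatively extended by a new anchor when a new task arrives. This is a minimal illustration under assumptions, not the authors' CSP implementation; the class name PolicySubspace, the network sizes, the Dirichlet sampling of coefficients, and the use of torch.func.functional_call (PyTorch >= 2.0) are all illustrative choices.

# Illustrative sketch only (assumed names and shapes), not the authors' CSP code.
import copy
import torch
import torch.nn as nn
from torch.func import functional_call


class PolicySubspace(nn.Module):
    """A subspace spanned by anchor policies; a concrete policy is a convex
    combination of the anchors' weights."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        first_anchor = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, act_dim)
        )
        self.anchors = nn.ModuleList([first_anchor])

    def grow(self):
        # Tentatively extend the subspace with a new anchor (copied from the last one).
        # In the paper, the new anchor is kept only if it yields a sufficiently better
        # policy for the new task; otherwise the subspace keeps its previous size.
        self.anchors.append(copy.deepcopy(self.anchors[-1]))

    def sample_alpha(self):
        # Draw convex combination coefficients uniformly from the simplex.
        return torch.distributions.Dirichlet(torch.ones(len(self.anchors))).sample()

    def forward(self, obs, alpha):
        # Mix the anchors' weights with coefficients alpha, then run the mixed policy.
        names = [n for n, _ in self.anchors[0].named_parameters()]
        mixed = {}
        for name in names:
            tensors = [dict(a.named_parameters())[name] for a in self.anchors]
            mixed[name] = sum(c * t for c, t in zip(alpha, tensors))
        return functional_call(self.anchors[0], mixed, (obs,))


# Hypothetical usage: grow the subspace for a new task, then evaluate a policy
# drawn from it.
subspace = PolicySubspace(obs_dim=8, act_dim=2)
subspace.grow()                    # new task arrives: try a larger subspace
alpha = subspace.sample_alpha()    # e.g. tensor([0.3, 0.7])
action = subspace(torch.randn(1, 8), alpha)

Because every policy in the subspace shares the same anchor parameters, storage grows with the number of anchors rather than the number of tasks, which is the scalability property the abstract refers to.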

Cite

Text

Gaya et al. "Building a Subspace of Policies for Scalable Continual Learning." International Conference on Learning Representations, 2023.

Markdown

[Gaya et al. "Building a Subspace of Policies for Scalable Continual Learning." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/gaya2023iclr-building/)

BibTeX

@inproceedings{gaya2023iclr-building,
  title     = {{Building a Subspace of Policies for Scalable Continual Learning}},
  author    = {Gaya, Jean-Baptiste and Doan, Thang and Caccia, Lucas and Soulier, Laure and Denoyer, Ludovic and Raileanu, Roberta},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/gaya2023iclr-building/}
}