Metalearning Continual Learning Algorithms
Abstract
General-purpose learning systems should improve themselves in an open-ended fashion in ever-changing environments. Conventional learning algorithms for neural networks, however, suffer from catastrophic forgetting (CF), i.e., previously acquired skills are forgotten when a new task is learned. Instead of hand-crafting new algorithms for avoiding CF, we propose Automated Continual Learning (ACL) to train self-referential neural networks to metalearn their own in-context continual (meta)learning algorithms. ACL encodes continual learning (CL) desiderata (good performance on both old and new tasks) into its metalearning objectives. Our experiments demonstrate that ACL effectively resolves "in-context catastrophic forgetting," a problem that naive in-context learning algorithms suffer from; ACL-learned algorithms outperform both hand-crafted learning algorithms and popular meta-continual learning methods on the Split-MNIST benchmark in the replay-free setting, and enable continual learning of diverse tasks consisting of multiple standard image classification datasets. We also discuss the current limitations of in-context CL by comparing ACL with state-of-the-art CL methods that leverage pre-trained models. Overall, we bring several novel perspectives into the long-standing problem of CL.
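As a rough illustration of the abstract's core idea, the sketch below shows one way a metalearning objective could encode the CL desiderata: a sequence model consumes task A and then task B in-context, and the meta-loss evaluates held-out queries from both tasks, so forgetting task A is directly penalized. This is a minimal sketch under assumptions, not the authors' implementation: the `InContextLearner` class, `acl_meta_loss` function, shapes, and hyperparameters are hypothetical, and a GRU stands in for the paper's self-referential network purely as a generic sequence learner.

```python
import torch
import torch.nn as nn

class InContextLearner(nn.Module):
    """Toy sequence model: reads a context of (input, label) pairs, then predicts labels for queries."""
    def __init__(self, dim_in, dim_hidden, n_classes):
        super().__init__()
        self.embed = nn.Linear(dim_in + n_classes, dim_hidden)
        self.core = nn.GRU(dim_hidden, dim_hidden, batch_first=True)  # stand-in for the self-referential net
        self.head = nn.Linear(dim_hidden, n_classes)

    def forward(self, context, queries):
        # context: (B, T, dim_in + n_classes); queries: (B, Q, dim_in + n_classes) with a zeroed label slot
        seq = torch.cat([context, queries], dim=1)
        h, _ = self.core(self.embed(seq))
        return self.head(h[:, context.size(1):])  # predictions at the query positions


def acl_meta_loss(model, task_a, task_b, criterion=nn.CrossEntropyLoss()):
    """Present task A then task B in-context; score queries from both the old and the new task."""
    ctx_a, (qx_a, qy_a) = task_a
    ctx_b, (qx_b, qy_b) = task_b
    full_context = torch.cat([ctx_a, ctx_b], dim=1)                       # sequential exposure: A, then B
    loss_new = criterion(model(full_context, qx_b).transpose(1, 2), qy_b)  # learn the current task
    loss_old = criterion(model(full_context, qx_a).transpose(1, 2), qy_a)  # do not forget the old task
    return loss_new + loss_old


# Toy usage with random data (shapes only, no meaningful learning):
B, T, Q, D, C = 4, 16, 8, 32, 5
def rand_task():
    ctx = torch.randn(B, T, D + C)
    qx = torch.cat([torch.randn(B, Q, D), torch.zeros(B, Q, C)], dim=-1)  # queries carry no label
    qy = torch.randint(0, C, (B, Q))
    return ctx, (qx, qy)

model = InContextLearner(D, 64, C)
loss = acl_meta_loss(model, rand_task(), rand_task())
loss.backward()  # an outer-loop optimizer would update the metalearner's weights from this gradient
```

The key design point this sketch tries to convey is that both loss terms are computed after the model has seen the full task sequence, so the outer loop only rewards in-context learning strategies that retain earlier tasks.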
Cite
Text
Irie et al. "Metalearning Continual Learning Algorithms." Transactions on Machine Learning Research, 2025.
Markdown
[Irie et al. "Metalearning Continual Learning Algorithms." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/irie2025tmlr-metalearning/)
BibTeX
@article{irie2025tmlr-metalearning,
  title   = {{Metalearning Continual Learning Algorithms}},
  author  = {Irie, Kazuki and Csordás, Róbert and Schmidhuber, Jürgen},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/irie2025tmlr-metalearning/}
}