Continual Learning with Recursive Gradient Optimization

Abstract

Learning multiple tasks sequentially without forgetting previous knowledge, called Continual Learning (CL), remains a long-standing challenge for neural networks. Most existing methods rely on additional network capacity or data replay. In contrast, we introduce a novel approach that we refer to as Recursive Gradient Optimization (RGO). RGO is composed of an iteratively updated optimizer that modifies the gradient to minimize forgetting without data replay, and a virtual Feature Encoding Layer (FEL) that represents different long-term structures using only task descriptors. Experiments demonstrate that RGO significantly outperforms the baselines on popular continual classification benchmarks and achieves new state-of-the-art performance on 20-split-CIFAR100 (82.22%) and 20-split-miniImageNet (72.63%). With higher average accuracy than Single-Task Learning (STL), the method offers a flexible and reliable way to provide continual learning capabilities for models that rely on gradient descent.
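The abstract's core idea is to alter the training gradient itself, rather than store old data or grow the network. The sketch below is a minimal, hypothetical Python illustration of that general idea, not the paper's actual RGO update: it recursively accumulates a simple diagonal "importance" estimate after each task and uses it to damp new-task gradient steps along directions that earlier tasks relied on. All names, the synthetic tasks, and the diagonal preconditioner are assumptions made for illustration only.

import numpy as np

# Illustrative sketch of gradient-modification-based continual learning.
# NOTE: this is not the RGO algorithm from the paper; it only shows the
# general pattern of recursively accumulating statistics from earlier
# tasks and rescaling new-task gradients with them.

rng = np.random.default_rng(0)

def make_task(w_true, n=256, d=20):
    """Synthetic linear-regression task with its own ground-truth weights."""
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

d = 20
w = np.zeros(d)                  # shared model parameters
importance = np.zeros(d)         # accumulated (diagonal) importance from old tasks
tasks = [make_task(rng.normal(size=d)) for _ in range(3)]

for t, (X, y) in enumerate(tasks):
    for _ in range(200):
        grad = X.T @ (X @ w - y) / len(y)       # plain task gradient
        # Damp movement along directions that earlier tasks depended on
        # (hypothetical diagonal preconditioner, recursively updated below).
        precond = 1.0 / (1.0 + importance)
        w -= 0.1 * precond * grad
    # After finishing a task, fold its (diagonal) curvature into the estimate.
    importance += np.mean(X**2, axis=0)
    print(f"task {t}: train MSE = {np.mean((X @ w - y)**2):.4f}")

The sketch shows only the control flow: train on a task with modified gradients, then recursively update the optimizer's state before the next task arrives, with no replay buffer and no added parameters.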

Cite

Text

Liu and Liu. "Continual Learning with Recursive Gradient Optimization." International Conference on Learning Representations, 2022.

Markdown

[Liu and Liu. "Continual Learning with Recursive Gradient Optimization." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/liu2022iclr-continual/)

BibTeX

@inproceedings{liu2022iclr-continual,
  title     = {{Continual Learning with Recursive Gradient Optimization}},
  author    = {Liu, Hao and Liu, Huaping},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/liu2022iclr-continual/}
}