Continual Learning via Sequential Function-Space Variational Inference

Abstract

Sequential Bayesian inference over predictive functions is a natural framework for continual learning from streams of data. However, applying it to neural networks has proved challenging in practice. Addressing the drawbacks of existing techniques, we propose an optimization objective derived by formulating continual learning as sequential function-space variational inference. In contrast to existing methods that regularize neural network parameters directly, this objective allows parameters to vary widely during training, enabling better adaptation to new tasks. Compared to objectives that directly regularize neural network predictions, the proposed objective allows for more flexible variational distributions and more effective regularization. We demonstrate that, across a range of task sequences, neural networks trained via sequential function-space variational inference achieve better predictive accuracy than networks trained with related methods while depending less on maintaining a set of representative points from previous tasks.
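To make the abstract's core idea concrete, below is a minimal sketch of a function-space variational objective for continual learning: a diagonal-Gaussian posterior over the weights of a small network, trained with an expected log-likelihood on the current task plus a KL divergence between the induced distributions over function outputs (under the current and the previous, frozen posterior) evaluated at a set of context points from earlier tasks. This is an illustrative simplification, not the authors' implementation: the paper obtains function-space distributions by linearizing the network, whereas this sketch moment-matches them by Monte Carlo. All names here (VariationalLinear, VariationalMLP, sfsvi_loss, context_x) are assumptions introduced for the example.

import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalLinear(nn.Module):
    """Linear layer with a diagonal-Gaussian variational posterior over weights."""

    def __init__(self, in_f, out_f):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(out_f, in_f) * 0.1)
        self.w_logvar = nn.Parameter(torch.full((out_f, in_f), -6.0))
        self.b_mu = nn.Parameter(torch.zeros(out_f))
        self.b_logvar = nn.Parameter(torch.full((out_f,), -6.0))

    def forward(self, x):
        # Reparameterized weight sample on every forward pass.
        w = self.w_mu + torch.exp(0.5 * self.w_logvar) * torch.randn_like(self.w_mu)
        b = self.b_mu + torch.exp(0.5 * self.b_logvar) * torch.randn_like(self.b_mu)
        return F.linear(x, w, b)


class VariationalMLP(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.l1 = VariationalLinear(d_in, d_hidden)
        self.l2 = VariationalLinear(d_hidden, d_out)

    def forward(self, x):
        return self.l2(torch.relu(self.l1(x)))


def function_gaussian(model, x, n_samples=32):
    # Moment-match the distribution over function outputs at x with a
    # diagonal Gaussian, estimated by Monte Carlo over weight samples
    # (the paper instead uses a linearization of the network).
    outs = torch.stack([model(x) for _ in range(n_samples)])
    return outs.mean(0), outs.var(0) + 1e-6


def gaussian_kl(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over output dimensions.
    return 0.5 * (torch.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1).sum()


def sfsvi_loss(model, prev_model, x, y, context_x):
    # Expected log-likelihood on the current task (one MC sample here) ...
    nll = F.cross_entropy(model(x), y, reduction="sum")
    # ... regularized toward the previous posterior in function space,
    # evaluated only at the context points, so the weights themselves
    # remain free to change as long as the predictions at context_x do not.
    mu_q, var_q = function_gaussian(model, context_x)
    with torch.no_grad():
        mu_p, var_p = function_gaussian(prev_model, context_x)
    return nll + gaussian_kl(mu_q, var_q, mu_p, var_p)

In use, one would freeze a copy of the model at the end of each task (e.g. prev_model = copy.deepcopy(model).requires_grad_(False)) and draw context_x from inputs seen on earlier tasks; the function-space KL then penalizes changes to the predictive distribution at those points rather than changes to the parameters, which is the contrast with parameter-space methods drawn in the abstract.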

Cite

Text

Rudner et al. "Continual Learning via Sequential Function-Space Variational Inference." International Conference on Machine Learning, 2022.

Markdown

[Rudner et al. "Continual Learning via Sequential Function-Space Variational Inference." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/rudner2022icml-continual/)

BibTeX

@inproceedings{rudner2022icml-continual,
  title     = {{Continual Learning via Sequential Function-Space Variational Inference}},
  author    = {Rudner, Tim G. J. and Bickford Smith, Freddie and Feng, Qixuan and Teh, Yee Whye and Gal, Yarin},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {18871--18887},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/rudner2022icml-continual/}
}