Continual Learning with Bayesian Neural Networks for Non-Stationary Data

Abstract

This work addresses continual learning for non-stationary data, using Bayesian neural networks and memory-based online variational Bayes. We represent the posterior approximation of the network weights by a diagonal Gaussian distribution and a complementary memory of raw data. This raw data corresponds to likelihood terms that cannot be well approximated by the Gaussian. We introduce a novel method for sequentially updating both components of the posterior approximation. Furthermore, we propose Bayesian forgetting and a Gaussian diffusion process for adapting to non-stationary data. The experimental results show that our update method improves on existing approaches for streaming data. Additionally, the adaptation methods lead to better predictive performance for non-stationary data.
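The abstract names two adaptation mechanisms, Bayesian forgetting and a Gaussian diffusion process, acting on a diagonal Gaussian posterior over the weights. As a rough illustration of how such mechanisms can act on that posterior, here is a minimal NumPy sketch. The function names, the forgetting rate `lam`, and the diffusion variance `sigma_d2` are illustrative assumptions; the Gaussian forms shown are one common instantiation consistent with the abstract's description, not the paper's exact update rules.

```python
import numpy as np

def bayesian_forgetting(mu, var, mu0, var0, lam):
    # Temper the Gaussian posterior toward the prior N(mu0, var0):
    #   q_new(w) ∝ q(w)**(1 - lam) * p(w)**lam
    # For diagonal Gaussians this is a precision-weighted interpolation,
    # so old evidence is gradually forgotten in favour of the prior.
    prec_new = (1.0 - lam) / var + lam / var0
    var_new = 1.0 / prec_new
    mu_new = var_new * ((1.0 - lam) * mu / var + lam * mu0 / var0)
    return mu_new, var_new

def gaussian_diffusion(mu, var, sigma_d2):
    # Model weight drift between updates as w_t = w_{t-1} + eps,
    # eps ~ N(0, sigma_d2 * I): the mean is kept and the posterior
    # variance is inflated, reopening the posterior for new data.
    return mu, var + sigma_d2

# Toy usage: adapt a 3-dimensional weight posterior between updates.
mu, var = np.zeros(3), np.full(3, 0.1)   # current posterior q(w)
mu0, var0 = np.zeros(3), np.ones(3)      # fixed prior p(w)
mu, var = bayesian_forgetting(mu, var, mu0, var0, lam=0.05)
mu, var = gaussian_diffusion(mu, var, sigma_d2=0.01)
print(mu, var)
```

Both operations leave the diagonal Gaussian family closed, which is what makes them cheap to interleave with the sequential variational updates the abstract describes; the complementary raw-data memory is orthogonal to this sketch and is not modelled here.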

Cite

Text

Kurle et al. "Continual Learning with Bayesian Neural Networks for Non-Stationary Data." International Conference on Learning Representations, 2020.

Markdown

[Kurle et al. "Continual Learning with Bayesian Neural Networks for Non-Stationary Data." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/kurle2020iclr-continual/)

BibTeX

@inproceedings{kurle2020iclr-continual,
  title     = {{Continual Learning with Bayesian Neural Networks for Non-Stationary Data}},
  author    = {Kurle, Richard and Cseke, Botond and Klushyn, Alexej and van der Smagt, Patrick and Günnemann, Stephan},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/kurle2020iclr-continual/}
}