Variational Auto-Regressive Gaussian Processes for Continual Learning
Abstract
Through the sequential construction of posteriors as data are observed online, Bayes’ theorem provides a natural framework for continual learning. We develop Variational Auto-Regressive Gaussian Processes (VAR-GPs), a principled posterior updating mechanism for solving sequential tasks in continual learning. Relying on sparse inducing point approximations for scalable posteriors, we propose a novel auto-regressive variational distribution which reveals two fruitful connections to existing results in Bayesian inference: expectation propagation and orthogonal inducing points. Mean predictive entropy estimates show that VAR-GPs prevent catastrophic forgetting, which is empirically supported by strong performance on modern continual learning benchmarks against competitive baselines. A thorough ablation study demonstrates the efficacy of our modeling choices.
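To make the sequential-posterior idea concrete, below is a minimal, hypothetical sketch of task-by-task updating with a sparse GP over shared inducing points. It is not the paper's VAR-GP objective: inducing locations, kernel hyperparameters, and noise variance are held fixed, and each task refines q(u) with a closed-form Gaussian (DTC-style) update rather than an auto-regressive variational distribution. The helper names (`rbf`, `update_q`, `predict`) are illustrative, not from the paper.

```python
# Sketch: sequential posterior updating over sparse inducing points (assumptions
# noted above); earlier tasks' data are never revisited after their update.
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def update_q(m, S, X, y, Z, noise=0.1):
    """Refine q(u) = N(m, S) over inducing outputs u = f(Z) with one task's data.

    Uses the deterministic training conditional f(X) ~ K_xz K_zz^{-1} u, so the
    task contributes a linear-Gaussian likelihood and the update is closed form.
    """
    Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
    A = rbf(X, Z) @ np.linalg.inv(Kzz)               # K_xz K_zz^{-1}
    S_inv = np.linalg.inv(S) + (A.T @ A) / noise**2  # precision update
    S_new = np.linalg.inv(S_inv)
    m_new = S_new @ (np.linalg.inv(S) @ m + A.T @ y / noise**2)
    return m_new, S_new

def predict(m, S, Xs, Z):
    """Predictive mean at test inputs Xs given q(u) = N(m, S)."""
    Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
    A = rbf(Xs, Z) @ np.linalg.inv(Kzz)
    return A @ m

rng = np.random.default_rng(0)
Z = np.linspace(-3, 3, 10)                          # shared inducing locations
m, S = np.zeros(len(Z)), rbf(Z, Z) + 1e-6 * np.eye(len(Z))  # prior over u

# Task 1: data on the left half of the input space.
X1 = rng.uniform(-3, 0, 40); y1 = np.sin(X1) + 0.1 * rng.standard_normal(40)
m, S = update_q(m, S, X1, y1, Z)

# Task 2: data on the right half; task-1 data is not revisited.
X2 = rng.uniform(0, 3, 40); y2 = np.sin(X2) + 0.1 * rng.standard_normal(40)
m, S = update_q(m, S, X2, y2, Z)

Xs = np.linspace(-3, 3, 7)
print(np.round(predict(m, S, Xs, Z), 2))  # should roughly track sin(Xs) on both task regions
```

Because both tasks update the same q(u), the task-1 fit is retained after training on task 2; the paper's auto-regressive construction goes further by introducing new inducing points per task and conditioning them on the previous ones.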
Cite
Text
Kapoor et al. "Variational Auto-Regressive Gaussian Processes for Continual Learning." International Conference on Machine Learning, 2021.
Markdown
[Kapoor et al. "Variational Auto-Regressive Gaussian Processes for Continual Learning." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/kapoor2021icml-variational/)
BibTeX
@inproceedings{kapoor2021icml-variational,
title = {{Variational Auto-Regressive Gaussian Processes for Continual Learning}},
author = {Kapoor, Sanyam and Karaletsos, Theofanis and Bui, Thang D},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {5290--5300},
volume = {139},
url = {https://mlanthology.org/icml/2021/kapoor2021icml-variational/}
}