Reconciling Meta-Learning and Continual Learning with Online Mixtures of Tasks
Abstract
Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task. This approach encounters difficulty when transfer is not advantageous, for instance, when tasks are considerably dissimilar or change over time. We use the connection between gradient-based meta-learning and hierarchical Bayes to propose a Dirichlet process mixture of hierarchical Bayesian models over the parameters of an arbitrary parametric model such as a neural network. In contrast to consolidating inductive biases into a single set of hyperparameters, our approach of task-dependent hyperparameter selection better handles latent distribution shift, as demonstrated on a set of evolving, image-based, few-shot learning benchmarks.
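The following is a minimal, hypothetical sketch of the idea the abstract describes, not the authors' released code: maintain several meta-learned initializations ("clusters"), score a new task under each by running a few MAML-style inner-loop gradient steps, and pick the initialization with the highest responsibility under a Chinese-restaurant-process prior. The linear model, the hard assignment, and names such as `inner_adapt` and `crp_alpha` are illustrative assumptions standing in for the paper's variational treatment.

```python
# Illustrative sketch (assumptions noted above), not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)


def task_loss(w, X, y):
    """Squared error of a linear model; stands in for an arbitrary parametric model."""
    return float(np.mean((X @ w - y) ** 2))


def inner_adapt(w0, X, y, lr=0.1, steps=5):
    """A few gradient steps from initialization w0 (MAML-style inner loop,
    interpretable as MAP inference under a prior centered at w0)."""
    w = w0.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


def assign_task(inits, counts, X, y, crp_alpha=1.0):
    """Responsibility of each existing cluster (plus a fresh one) for the task,
    combining a Chinese-restaurant-process prior with the adapted-task likelihood."""
    candidates = inits + [rng.normal(0.0, 1.0, size=inits[0].shape)]  # new-cluster proposal
    prior = np.array(counts + [crp_alpha], dtype=float)
    prior /= prior.sum()
    log_lik = np.array([-task_loss(inner_adapt(w0, X, y), X, y) for w0 in candidates])
    log_post = np.log(prior) + log_lik
    resp = np.exp(log_post - log_post.max())
    return candidates, resp / resp.sum()


# Toy usage: two meta-learned initializations, one synthetic regression task.
inits = [np.zeros(3), np.ones(3)]
counts = [4, 2]  # how many past tasks each cluster has absorbed
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, 1.0, 1.0]) + 0.05 * rng.normal(size=20)
candidates, resp = assign_task(inits, counts, X, y)
k = int(np.argmax(resp))
adapted = inner_adapt(candidates[k], X, y)
print("responsibilities:", np.round(resp, 3), "-> chose cluster", k)
```

Under this toy setup, the task is routed to whichever initialization adapts best in a few steps, which is the task-dependent hyperparameter selection the abstract contrasts with a single consolidated initialization.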
Cite
Text
Jerfel et al. "Reconciling Meta-Learning and Continual Learning with Online Mixtures of Tasks." Neural Information Processing Systems, 2019.
Markdown
[Jerfel et al. "Reconciling Meta-Learning and Continual Learning with Online Mixtures of Tasks." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/jerfel2019neurips-reconciling/)
BibTeX
@inproceedings{jerfel2019neurips-reconciling,
  title     = {{Reconciling Meta-Learning and Continual Learning with Online Mixtures of Tasks}},
  author    = {Jerfel, Ghassen and Grant, Erin and Griffiths, Tom and Heller, Katherine A.},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {9122--9133},
  url       = {https://mlanthology.org/neurips/2019/jerfel2019neurips-reconciling/}
}