Gradient-EM Bayesian Meta-Learning
Abstract
Bayesian meta-learning enables robust and fast adaptation to new tasks with uncertainty assessment. The key idea behind Bayesian meta-learning is empirical Bayes inference of a hierarchical model. In this work, we extend this framework to include a variety of existing methods, before proposing our variant based on the gradient-EM algorithm. Our method improves computational efficiency by avoiding back-propagation in the meta-update step, which is expensive for deep neural networks. Furthermore, it adds flexibility to the inner-update optimization procedure by decoupling it from the meta-update. Experiments on sinusoidal regression, few-shot image classification, and policy-based reinforcement learning show that our method not only achieves better accuracy at lower computational cost, but also provides more robust uncertainty estimates.
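The abstract's core claim, that the meta-update can avoid back-propagating through the inner adaptation loop, can be illustrated with a toy sketch. The following is a hypothetical simplification, not the authors' exact algorithm: an E-step adapts a copy of the meta-parameters to each task with plain gradient steps, and an M-step updates the Gaussian prior mean in closed form as an average of the adapted parameters, so no second-order gradients are ever computed.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_adapt(theta, target, lr=0.1, steps=20):
    """E-step sketch: gradient descent on a toy quadratic task loss (w - target)^2."""
    w = theta.copy()
    for _ in range(steps):
        grad = 2.0 * (w - target)      # d/dw of (w - target)^2
        w -= lr * grad
    return w

# Toy tasks: each task's optimum is drawn around 3.0; the learned prior
# mean should converge near the mean of these optima.
task_targets = rng.normal(loc=3.0, scale=0.5, size=8)

theta = np.zeros(1)                    # meta-parameter (prior mean)
for _ in range(50):                    # meta-iterations
    adapted = [inner_adapt(theta, np.array([t])) for t in task_targets]
    # M-step sketch: closed-form update of the prior mean -- just the
    # average of adapted task parameters, no back-propagation through
    # the inner loop.
    theta = np.mean(adapted, axis=0)

print(float(theta[0]))  # near the mean of task_targets (roughly 3.0)
```

Because the M-step only consumes the adapted parameters (not the computation graph that produced them), the inner-update procedure can be swapped freely, which is the decoupling the abstract refers to.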
Cite
Text
Zou and Lu. "Gradient-EM Bayesian Meta-Learning." Neural Information Processing Systems, 2020.
Markdown
[Zou and Lu. "Gradient-EM Bayesian Meta-Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/zou2020neurips-gradientem/)
BibTeX
@inproceedings{zou2020neurips-gradientem,
title = {{Gradient-EM Bayesian Meta-Learning}},
author = {Zou, Yayi and Lu, Xiaoqi},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/zou2020neurips-gradientem/}
}