Meta-Learning Probabilistic Inference for Prediction
Abstract
This paper introduces a new framework for data-efficient and versatile learning. Specifically: 1) We develop ML-PIP, a general framework for Meta-Learning approximate Probabilistic Inference for Prediction. ML-PIP extends existing probabilistic interpretations of meta-learning to cover a broad class of methods. 2) We introduce Versa, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass. Versa substitutes optimization at test time with forward passes through inference networks, amortizing the cost of inference and relieving the need for second derivatives during training. 3) We evaluate Versa on benchmark datasets, where the method sets new state-of-the-art results and handles arbitrary numbers of shots and, for classification, arbitrary numbers of classes at train and test time. The power of the approach is then demonstrated through a challenging few-shot ShapeNet view-reconstruction task.
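To make the core idea concrete, below is a minimal sketch of amortized inference for few-shot classification in the spirit of Versa: a permutation-invariant network maps a class's support features to a Gaussian over that class's classifier weights, and prediction averages over sampled weights in a single forward pass, with no test-time optimization. This is an illustrative reconstruction, not the authors' implementation; the module names, feature sizes, and pooling choice here are hypothetical.

```python
import torch
import torch.nn as nn

class AmortizationNetwork(nn.Module):
    """Maps a few-shot support set to a Gaussian over per-class
    classifier weights, in a single forward pass (hypothetical sketch)."""

    def __init__(self, feat_dim=64):
        super().__init__()
        # Heads producing the mean and log-variance of the weight posterior.
        self.mean_head = nn.Linear(feat_dim, feat_dim)
        self.logvar_head = nn.Linear(feat_dim, feat_dim)

    def forward(self, class_features):
        # class_features: (num_shots, feat_dim) embeddings of one class's
        # support examples. Mean-pooling makes the network permutation-
        # invariant and lets it accept an arbitrary number of shots.
        pooled = class_features.mean(dim=0)
        return self.mean_head(pooled), self.logvar_head(pooled)

def predict(amortizer, support_feats_per_class, query_feats, num_samples=10):
    """Classify query features by sampling task-specific weights from the
    amortized posterior -- no optimization at test time."""
    logits = []
    for feats in support_feats_per_class:        # one entry per class
        mu, logvar = amortizer(feats)
        std = (0.5 * logvar).exp()
        # Monte Carlo samples of this class's weight vector (reparameterized).
        w = mu + std * torch.randn(num_samples, mu.shape[0])
        logits.append(query_feats @ w.t())       # (num_queries, num_samples)
    # Average predictive probabilities over the sampled weights.
    logits = torch.stack(logits, dim=-1)         # (queries, samples, classes)
    return logits.softmax(dim=-1).mean(dim=1)    # (queries, classes)

# Usage: a 5-way episode with a different number of shots per class.
amortizer = AmortizationNetwork(feat_dim=64)
support = [torch.randn(k, 64) for k in (1, 3, 5, 2, 4)]
queries = torch.randn(10, 64)
print(predict(amortizer, support, queries).shape)  # torch.Size([10, 5])
```

Because the amortizer only ever sees a pooled summary of the support set, the same trained network handles any number of shots and, by looping over classes, any number of classes, which matches the flexibility the abstract claims.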
Cite
Text
Gordon et al. "Meta-Learning Probabilistic Inference for Prediction." International Conference on Learning Representations, 2019.
Markdown
[Gordon et al. "Meta-Learning Probabilistic Inference for Prediction." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/gordon2019iclr-metalearning/)
BibTeX
@inproceedings{gordon2019iclr-metalearning,
title = {{Meta-Learning Probabilistic Inference for Prediction}},
author = {Gordon, Jonathan and Bronskill, John and Bauer, Matthias and Nowozin, Sebastian and Turner, Richard},
booktitle = {International Conference on Learning Representations},
year = {2019},
url = {https://mlanthology.org/iclr/2019/gordon2019iclr-metalearning/}
}