Variational Task Encoders for Model-Agnostic Meta-Learning

Abstract

Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on novel tasks. A critical challenge lies in the inherent uncertainty about whether new tasks are similar to those observed before, and robust meta-learning methods would ideally reason about this uncertainty and produce corresponding uncertainty estimates. We extend model-agnostic meta-learning with variational inference: we model the identity of new tasks as a latent random variable, which modulates the fine-tuning of meta-learned neural networks. Our approach requires little additional computation, makes no strong assumptions about the distribution of the neural network weights, and allows the algorithm to generalize to more divergent task distributions, resulting in better-calibrated uncertainty measures while maintaining accurate predictions.
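To make the idea concrete, below is a minimal PyTorch sketch of the general pattern the abstract describes, not the authors' implementation: an amortized encoder infers a Gaussian over a latent task variable z from the support set, a sample of z modulates the base learner, and the MAML-style inner/outer loops are trained jointly with a KL regularizer. All architecture choices, dimensions, and hyperparameters here are illustrative assumptions.

```python
# Hedged sketch of a variational task encoder for MAML-style adaptation.
# Everything below (layer sizes, how z modulates the learner, loss weights)
# is an assumption for illustration, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 8

class TaskEncoder(nn.Module):
    """Amortized inference network q(z | support set) -> (mu, log_var)."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * LATENT_DIM))

    def forward(self, support_xy):
        # Average over the support set for a permutation-invariant summary.
        h = self.net(support_xy).mean(dim=0)
        return h[:LATENT_DIM], h[LATENT_DIM:]  # mu, log_var

class Learner(nn.Module):
    """Base network; here z modulates it by simple input concatenation."""
    def __init__(self, x_dim, y_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + LATENT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, y_dim))

    def forward(self, x, z):
        return self.net(torch.cat([x, z.expand(x.size(0), -1)], dim=-1))

def meta_loss(learner, encoder, support_x, support_y, query_x, query_y,
              inner_lr=0.01, inner_steps=1, kl_weight=1e-3):
    # 1. Infer the latent task variable from the support set and sample it
    #    with the reparameterization trick.
    mu, log_var = encoder(torch.cat([support_x, support_y], dim=-1))
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)

    # 2. MAML inner loop: adapt a functional copy of the weights on the
    #    support set, keeping the graph so the outer loss can backprop
    #    through the adaptation.
    params = dict(learner.named_parameters())
    for _ in range(inner_steps):
        pred = torch.func.functional_call(learner, params, (support_x, z))
        grads = torch.autograd.grad(F.mse_loss(pred, support_y),
                                    list(params.values()), create_graph=True)
        params = {k: v - inner_lr * g
                  for (k, v), g in zip(params.items(), grads)}

    # 3. Outer loss: query error plus KL(q(z | support) || N(0, I)).
    query_pred = torch.func.functional_call(learner, params, (query_x, z))
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return F.mse_loss(query_pred, query_y) + kl_weight * kl

# Toy usage with 1-D regression shapes (e.g. sinusoid-style tasks):
encoder = TaskEncoder(in_dim=2)          # x_dim=1 plus y_dim=1
learner = Learner(x_dim=1, y_dim=1)
loss = meta_loss(learner, encoder,
                 torch.randn(5, 1), torch.randn(5, 1),    # support set
                 torch.randn(10, 1), torch.randn(10, 1))  # query set
loss.backward()
```

Because z is sampled rather than point-estimated, repeated forward passes with fresh samples yield an ensemble of adapted predictors, which is one simple way such a model can expose task-level uncertainty at test time.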

Cite

Text

Schagen and Vanschoren. "Variational Task Encoders for Model-Agnostic Meta-Learning." NeurIPS 2021 Workshops: MetaLearn, 2021.

Markdown

[Schagen and Vanschoren. "Variational Task Encoders for Model-Agnostic Meta-Learning." NeurIPS 2021 Workshops: MetaLearn, 2021.](https://mlanthology.org/neuripsw/2021/schagen2021neuripsw-variational/)

BibTeX

@inproceedings{schagen2021neuripsw-variational,
  title     = {{Variational Task Encoders for Model-Agnostic Meta-Learning}},
  author    = {Schagen, Luuk and Vanschoren, Joaquin},
  booktitle = {NeurIPS 2021 Workshops: MetaLearn},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/schagen2021neuripsw-variational/}
}