Meta-Learning with Latent Embedding Optimization

Abstract

Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.
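To make the core idea concrete, below is a minimal, hedged sketch of LEO-style adaptation: a support set is encoded into a low-dimensional latent code, the code is adapted with a few gradient steps on the support loss, and only then is it decoded into high-dimensional classifier weights used on the query set. This is not the authors' implementation; the encoder/decoder architectures, dimensions, and variable names (e.g. `LEOSketch`, `feat_dim`, `inner_lr`) are illustrative assumptions, and the probabilistic latent and regularizers of the full method are omitted.

```python
# Minimal sketch of latent embedding optimization (illustrative, not the paper's code).
# Assumes image features are precomputed by a fixed embedding network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LEOSketch(nn.Module):
    def __init__(self, feat_dim=64, latent_dim=16, n_classes=5):
        super().__init__()
        self.n_classes = n_classes
        # Encoder: per-class support embeddings -> low-dimensional latent code z.
        self.encoder = nn.Linear(feat_dim, latent_dim)
        # Decoder: latent code z -> per-class classifier weights w.
        self.decoder = nn.Linear(latent_dim, feat_dim)

    def encode(self, support_x, support_y):
        # One latent code per class, from the mean of that class's support features.
        z = [self.encoder(support_x[support_y == c].mean(dim=0))
             for c in range(self.n_classes)]
        return torch.stack(z)  # [n_classes, latent_dim]

    def decode_and_loss(self, z, x, y):
        w = self.decoder(z)        # [n_classes, feat_dim]: high-dimensional weights
        logits = x @ w.t()         # linear classifier in feature space
        return F.cross_entropy(logits, y)

    def adapt(self, support_x, support_y, inner_steps=5, inner_lr=1.0):
        # Inner loop: gradient-based adaptation in latent space, not weight space.
        z = self.encode(support_x, support_y)
        for _ in range(inner_steps):
            loss = self.decode_and_loss(z, support_x, support_y)
            (grad,) = torch.autograd.grad(loss, z, create_graph=True)
            z = z - inner_lr * grad
        return z


# One meta-training step on a single 5-way 5-shot episode with random features.
model = LEOSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
support_x = torch.randn(25, 64)
support_y = torch.arange(5).repeat_interleave(5)
query_x = torch.randn(50, 64)
query_y = torch.arange(5).repeat_interleave(10)

z_adapted = model.adapt(support_x, support_y)
meta_loss = model.decode_and_loss(z_adapted, query_x, query_y)  # outer-loop loss
opt.zero_grad()
meta_loss.backward()   # backpropagates through the latent-space inner loop
opt.step()
```

The design choice the sketch highlights is the decoupling emphasized in the abstract: the inner loop only ever differentiates with respect to the small latent code `z`, while the decoder carries it back to the full parameter space of the classifier.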

Cite

Text

Rusu et al. "Meta-Learning with Latent Embedding Optimization." International Conference on Learning Representations, 2019.

Markdown

[Rusu et al. "Meta-Learning with Latent Embedding Optimization." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/rusu2019iclr-metalearning/)

BibTeX

@inproceedings{rusu2019iclr-metalearning,
  title     = {{Meta-Learning with Latent Embedding Optimization}},
  author    = {Rusu, Andrei A. and Rao, Dushyant and Sygnowski, Jakub and Vinyals, Oriol and Pascanu, Razvan and Osindero, Simon and Hadsell, Raia},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/rusu2019iclr-metalearning/}
}