Meta-Learning via Classifier(-Free) Guidance

Abstract

We aim to develop meta-learning techniques that achieve higher zero-shot performance than the state of the art on unseen tasks. To do so, we take inspiration from recent advances in generative modeling and language-conditioned image synthesis to propose meta-learning techniques that use natural language guidance for zero-shot task adaptation. We first train an unconditional generative hypernetwork model to produce neural network weights; then we train a second "guidance" model that, given a natural language task description, traverses the hypernetwork latent space to find high-performance task-adapted weights in a zero-shot manner. We explore two alternative approaches for latent space guidance: "HyperCLIP"-based classifier guidance and a conditional Hypernetwork Latent Diffusion Model ("HyperLDM"), which we show to benefit from the classifier-free guidance technique common in image generation. Finally, we demonstrate that our approaches outperform existing meta-learning methods with zero-shot learning experiments on our Meta-VQA dataset.
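The classifier-free guidance technique the abstract refers to combines conditional and unconditional model predictions at sampling time by extrapolating one away from the other with a guidance scale. The sketch below shows that standard combination in isolation; it is a generic illustration under assumed placeholder noise predictions, not the authors' HyperLDM implementation, and the function name `cfg_combine` is hypothetical.

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate the conditional noise
    prediction away from the unconditional one with guidance scale w.
    w = 0 recovers the purely conditional prediction; larger w
    strengthens the conditioning signal at some cost in diversity."""
    return (1.0 + w) * eps_cond - w * eps_uncond

# Toy example with placeholder noise predictions (illustration only).
eps_u = np.array([0.1, -0.2])
eps_c = np.array([0.3, 0.1])
guided = cfg_combine(eps_u, eps_c, w=2.0)
```

In a diffusion sampler, `guided` would replace the raw conditional prediction at each denoising step; the same weighting idea carries over when the diffusion runs in a hypernetwork latent space rather than pixel space.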

Cite

Text

Nava et al. "Meta-Learning via Classifier(-Free) Guidance." NeurIPS 2022 Workshops: MetaLearn, 2022.

Markdown

[Nava et al. "Meta-Learning via Classifier(-Free) Guidance." NeurIPS 2022 Workshops: MetaLearn, 2022.](https://mlanthology.org/neuripsw/2022/nava2022neuripsw-metalearning/)

BibTeX

@inproceedings{nava2022neuripsw-metalearning,
  title     = {{Meta-Learning via Classifier(-Free) Guidance}},
  author    = {Nava, Elvis and Kobayashi, Seijin and Yin, Yifei and Katzschmann, Robert K. and Grewe, Benjamin F.},
  booktitle = {NeurIPS 2022 Workshops: MetaLearn},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/nava2022neuripsw-metalearning/}
}