Meta-FAVAE: Toward Fast and Diverse Few-Shot Image Generation via Meta-Learning and Feedback Augmented Adversarial VAE

Abstract

Learning to synthesize realistic images of new categories from just one or a few examples is a challenging task for deep generative models, which usually require training on large amounts of data. In this work, we propose a data-efficient meta-learning framework for fast adaptation to few-shot image generation tasks, built on an adversarial variational auto-encoder with a feedback augmentation strategy. By training the model as a meta-learner, our method adapts faster to new tasks with a significant reduction in model parameters. We design a novel feedback-augmented adversarial variational auto-encoder that learns to synthesize new samples for an unseen category after seeing only a few examples from it; the generated interpolated samples are then fed back to expand the encoder's training inputs, which effectively increases the diversity of the decoder's output and prevents mode collapse. Additionally, the method also generalizes to more complex color image generation tasks.
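The feedback-augmentation loop the abstract describes (encode a few real examples, interpolate their latent codes, decode, and feed the synthetic samples back as extra encoder inputs) can be sketched as below. This is a minimal illustration with toy linear `encode`/`decode` stand-ins; all names, shapes, and the interpolation scheme are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the VAE's encoder and decoder (illustrative only).
W_enc = rng.standard_normal((16, 4))   # image_dim=16 -> latent_dim=4
W_dec = rng.standard_normal((4, 16))   # latent_dim=4 -> image_dim=16

def encode(x):
    return x @ W_enc

def decode(z):
    return z @ W_dec

# Few-shot support set: 3 examples of an unseen category.
support = rng.standard_normal((3, 16))

# 1. Encode the few real examples into latent codes.
z = encode(support)

# 2. Interpolate between pairs of latent codes to synthesize new samples.
alphas = rng.uniform(size=(3, 1))
z_interp = alphas * z + (1 - alphas) * np.roll(z, 1, axis=0)
synthetic = decode(z_interp)

# 3. Feedback loop: the interpolated samples expand the encoder's training
#    inputs, encouraging more diverse decoder outputs.
augmented_inputs = np.concatenate([support, synthetic], axis=0)
```

In the actual model this loop would run inside meta-training, with adversarial and reconstruction losses applied to both the real and fed-back samples; the sketch only shows the data flow.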

Cite

Text

Ying et al. "Meta-FAVAE: Toward Fast and Diverse Few-Shot Image Generation via Meta-Learning and Feedback Augmented Adversarial VAE." ICLR 2022 Workshops: DGM4HSD, 2022.

Markdown

[Ying et al. "Meta-FAVAE: Toward Fast and Diverse Few-Shot Image Generation via Meta-Learning and Feedback Augmented Adversarial VAE." ICLR 2022 Workshops: DGM4HSD, 2022.](https://mlanthology.org/iclrw/2022/ying2022iclrw-metafavae/)

BibTeX

@inproceedings{ying2022iclrw-metafavae,
  title     = {{Meta-FAVAE: Toward Fast and Diverse Few-Shot Image Generation via Meta-Learning and Feedback Augmented Adversarial VAE}},
  author    = {Ying, Fangli and Phaphuangwittayakul, Aniwat and Guo, Yi and Huang, Xiaoyue and Wang, Yue},
  booktitle = {ICLR 2022 Workshops: DGM4HSD},
  year      = {2022},
  url       = {https://mlanthology.org/iclrw/2022/ying2022iclrw-metafavae/}
}