Few Shot Image Generation via Implicit Autoencoding of Support Sets

Abstract

Recent generative models such as generative adversarial networks have achieved remarkable success in generating realistic images, but they require large training datasets and substantial computational resources. The goal of few-shot image generation is to learn the distribution of a new dataset from only a handful of examples by transferring knowledge from structurally similar datasets. Towards this goal, we propose the “Implicit Support Set Autoencoder” (ISSA), which adversarially learns the relationship across datasets through an unsupervised dataset representation, while the distribution of each individual dataset is modeled with an implicit distribution. Given a few examples from a new dataset, ISSA can generate new samples by inferring the representation of the underlying distribution in a single forward pass. We demonstrate significant gains from our method in generating high-quality and diverse images for unseen classes of the Omniglot and CelebA datasets in the few-shot setting.
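
To make the pipeline described above concrete, here is a minimal PyTorch sketch of the interface it implies: a permutation-invariant encoder maps a small support set to a dataset-level representation, and an implicit (noise-driven) generator conditions on that representation, so sampling for an unseen class needs only a single forward pass. The mean-pooled MLP encoder, the layer widths, and the names `SetEncoder` and `ConditionalGenerator` are illustrative assumptions rather than the authors' architecture, and the adversarial training objective is omitted.

```python
import torch
import torch.nn as nn


class SetEncoder(nn.Module):
    """Permutation-invariant encoder: a shared MLP is applied to each
    support example, then mean-pooled into one dataset representation."""

    def __init__(self, in_dim: int, rep_dim: int):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, rep_dim),
        )

    def forward(self, support: torch.Tensor) -> torch.Tensor:
        # support: (K, in_dim) -- K few-shot examples from one dataset/class
        return self.phi(support).mean(dim=0)  # (rep_dim,)


class ConditionalGenerator(nn.Module):
    """Implicit generator: pushes Gaussian noise, concatenated with the
    inferred representation, through an MLP to produce samples."""

    def __init__(self, rep_dim: int, noise_dim: int, out_dim: int):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(rep_dim + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, rep: torch.Tensor, n_samples: int) -> torch.Tensor:
        z = torch.randn(n_samples, self.noise_dim)
        rep = rep.expand(n_samples, -1)  # broadcast rep to each sample
        return self.net(torch.cat([rep, z], dim=1))


# Few-shot generation for an unseen class in a single forward pass:
encoder = SetEncoder(in_dim=784, rep_dim=64)
generator = ConditionalGenerator(rep_dim=64, noise_dim=32, out_dim=784)
support = torch.randn(5, 784)            # stand-in for 5 flattened 28x28 images
rep = encoder(support)                   # infer the dataset representation
samples = generator(rep, n_samples=16)   # (16, 784) new images
```

Mean pooling is one simple way to get the permutation invariance a set encoder needs: the inferred representation does not depend on the order of the K support examples.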

Cite

Text

Huang et al. "Few Shot Image Generation via Implicit Autoencoding of Support Sets." NeurIPS 2021 Workshops: MetaLearn, 2021.

Markdown

[Huang et al. "Few Shot Image Generation via Implicit Autoencoding of Support Sets." NeurIPS 2021 Workshops: MetaLearn, 2021.](https://mlanthology.org/neuripsw/2021/huang2021neuripsw-few/)

BibTeX

@inproceedings{huang2021neuripsw-few,
  title     = {{Few Shot Image Generation via Implicit Autoencoding of Support Sets}},
  author    = {Huang, Andy and Wang, Kuan-Chieh and Rabusseau, Guillaume and Makhzani, Alireza},
  booktitle = {NeurIPS 2021 Workshops: MetaLearn},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/huang2021neuripsw-few/}
}