One-Shot GAN: Learning to Generate Samples from Single Images and Videos

Abstract

Training GANs in low-data regimes remains a challenge, as overfitting often leads to memorization or training divergence. In this work, we introduce One-Shot GAN, which can learn to generate samples from a training set as small as one image or one video. We propose a two-branch discriminator, with content and layout branches designed to judge the internal content separately from the realism of the scene layout. This allows synthesis of visually plausible, novel compositions of a scene, with varying content and layout, while preserving the context of the original sample. Compared to previous single-image GAN models, One-Shot GAN achieves higher diversity and quality of synthesis. It is also not restricted to the single-image setting, successfully learning in the newly introduced setting of a single video.
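
The two-branch discriminator described above can be illustrated with a short sketch. The following is a minimal PyTorch interpretation of the idea, not the authors' implementation: all layer sizes, module names, and pooling choices are assumptions. The content branch pools away spatial structure so it can only judge *what* appears in the image, while the layout branch keeps the spatial grid and judges *where* things are placed.

```python
# Minimal sketch of a two-branch discriminator in the spirit of One-Shot GAN.
# Illustrative only; layer widths and head designs are assumptions.
import torch
import torch.nn as nn

class TwoBranchDiscriminator(nn.Module):
    def __init__(self, in_ch: int = 3, base_ch: int = 64):
        super().__init__()
        # Shared low-level feature extractor.
        self.shared = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_ch, base_ch * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Content branch: global average pooling discards spatial
        # arrangement, so this head can only judge image content.
        self.content_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(base_ch * 2, 1),
        )
        # Layout branch: 1x1 convolutions preserve the spatial grid,
        # so this head judges per-location scene layout realism.
        self.layout_head = nn.Sequential(
            nn.Conv2d(base_ch * 2, base_ch, 1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_ch, 1, 1),  # per-location realism logits
        )

    def forward(self, x: torch.Tensor):
        feats = self.shared(x)
        return self.content_head(feats), self.layout_head(feats)

if __name__ == "__main__":
    d = TwoBranchDiscriminator()
    content_logit, layout_logits = d(torch.randn(2, 3, 128, 128))
    print(content_logit.shape, layout_logits.shape)  # (2, 1) and (2, 1, 32, 32)
```

In a sketch like this, the two heads would feed separate adversarial loss terms, so the generator is pushed to produce images whose content and layout are each realistic on their own, rather than memorizing the single training sample.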

Cite

Text

Sushko et al. "One-Shot GAN: Learning to Generate Samples from Single Images and Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00293

Markdown

[Sushko et al. "One-Shot GAN: Learning to Generate Samples from Single Images and Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/sushko2021cvprw-oneshot/) doi:10.1109/CVPRW53098.2021.00293

BibTeX

@inproceedings{sushko2021cvprw-oneshot,
  title     = {{One-Shot GAN: Learning to Generate Samples from Single Images and Videos}},
  author    = {Sushko, Vadim and Gall, Jürgen and Khoreva, Anna},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2021},
  pages     = {2596--2600},
  doi       = {10.1109/CVPRW53098.2021.00293},
  url       = {https://mlanthology.org/cvprw/2021/sushko2021cvprw-oneshot/}
}