WeditGAN: Few-Shot Image Generation via Latent Space Relocation

Abstract

In few-shot image generation, directly training GAN models on just a handful of images risks overfitting. A popular solution is to transfer models pretrained on large source domains to small target ones. In this work, we introduce WeditGAN, which realizes model transfer by editing the intermediate latent codes w in StyleGANs with learned constant offsets (delta w), discovering and constructing target latent spaces by simply relocating the distribution of the source latent spaces. The established one-to-one mapping between latent spaces naturally prevents mode collapse and overfitting. We also propose variants of WeditGAN that further enhance the relocation process by regularizing the direction or finetuning the intensity of delta w. Experiments on a collection of widely used source/target datasets demonstrate that WeditGAN, while simple, is highly effective at generating realistic and diverse images for few-shot image generation. Codes are available at https://github.com/Ldhlwh/WeditGAN.
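The core idea in the abstract can be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the authors' code: all names (`relocate`, `delta_w`, `alpha`, the latent shapes) are assumptions. It shows how a single learned constant offset delta w translates every source latent code to a target latent code, and why the mapping is one-to-one.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, num_layers, w_dim = 4, 14, 512  # illustrative StyleGAN-like sizes

# Source latents, e.g. produced by a pretrained StyleGAN mapping network.
w_source = rng.standard_normal((batch, num_layers, w_dim))

# The constant offset is the only new trainable parameter when
# transferring to the target domain (here just random for illustration).
delta_w = 0.1 * rng.standard_normal((num_layers, w_dim))

def relocate(w, delta, alpha=1.0):
    """Shift latent codes by a constant offset; alpha scales the
    relocation intensity (which one of the paper's variants finetunes)."""
    return w + alpha * delta

w_target = relocate(w_source, delta_w)

# Because the edit is a pure translation, it is invertible, giving the
# one-to-one mapping between source and target latent spaces.
w_recovered = relocate(w_target, delta_w, alpha=-1.0)
assert np.allclose(w_recovered, w_source)
```

Note that, in this scheme, each target latent code corresponds to exactly one source latent code, which is the property the abstract credits with preventing mode collapse.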

Cite

Text

Duan et al. "WeditGAN: Few-Shot Image Generation via Latent Space Relocation." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/aaai.v38i2.27932

Markdown

[Duan et al. "WeditGAN: Few-Shot Image Generation via Latent Space Relocation." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/duan2024aaai-weditgan/) doi:10.1609/aaai.v38i2.27932

BibTeX

@inproceedings{duan2024aaai-weditgan,
  title     = {{WeditGAN: Few-Shot Image Generation via Latent Space Relocation}},
  author    = {Duan, Yuxuan and Niu, Li and Hong, Yan and Zhang, Liqing},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {1653--1661},
  doi       = {10.1609/aaai.v38i2.27932},
  url       = {https://mlanthology.org/aaai/2024/duan2024aaai-weditgan/}
}