Few-Shot Cross-Domain Image Generation via Inference-Time Latent-Code Learning
Abstract
In this work, our objective is to adapt a deep generative model trained on a large-scale source dataset to multiple target domains, each with scarce data. Specifically, we focus on adapting a pre-trained Generative Adversarial Network (GAN) to a target domain without re-training the generator. Our method draws motivation from the fact that out-of-distribution samples can be `embedded' onto the latent space of a pre-trained source GAN. We propose to train a small latent-generation network during the inference stage, each time a batch of target samples is to be generated. The resulting target latent codes are fed to the source generator to obtain novel target samples. Although every episode uses the same small set of target samples and the same source generator, independent training episodes of the latent-generation network yield diverse generated target samples. Our method, albeit simple, can generate data from multiple target distributions using a generator trained on a single source distribution. We demonstrate the efficacy of this surprisingly simple approach in generating multiple target datasets with only a single source generator and a few target samples.
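The inference-time procedure described above can be sketched in a few lines. The snippet below is a minimal, hypothetical simplification, not the paper's implementation: the frozen source generator is stood in for by a fixed random linear map, the latent-generation network is a single linear layer trained by plain gradient descent on a squared error to a handful of "target samples", and all dimensions, learning rates, and step counts are made up for illustration. Each independently seeded training episode produces a different batch of target-like outputs, which is the source of sample diversity in the method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "source generator": a fixed linear map from latent space to data
# space. This is a toy stand-in for a pre-trained GAN generator.
W_g = rng.normal(size=(8, 4))            # data_dim=8, latent_dim=4

def generator(w):
    """Map a batch of latent codes w (n, 4) to data space (n, 8)."""
    return w @ W_g.T

# A few target-domain samples (the scarce target data).
targets = rng.normal(loc=2.0, size=(5, 8))

def train_latent_net(seed, steps=2000, lr=0.005):
    """One inference-time training episode of a tiny latent-generation
    network (here a single linear layer A mapping noise z -> latent code w),
    minimizing squared error between generator outputs and target samples.
    The generator weights W_g are never updated."""
    ep = np.random.default_rng(seed)
    A = ep.normal(scale=0.1, size=(4, 4))   # latent-net weights (trainable)
    z = ep.normal(size=(5, 4))              # fixed noise batch for the episode
    for _ in range(steps):
        w = z @ A.T                         # latent codes
        x = generator(w)                    # candidate target samples
        err = x - targets
        # Chain rule through the frozen generator back to A (up to a
        # constant factor absorbed into the learning rate).
        grad_A = (err @ W_g).T @ z / len(z)
        A -= lr * grad_A
    return generator(z @ A.T)

# Two independent episodes yield two different target-like batches.
batch1 = train_latent_net(seed=1)
batch2 = train_latent_net(seed=2)
```

Because the latent network is re-initialized and re-trained per batch, diversity comes from the episodes themselves rather than from the (unchanged) generator.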
Cite
Text
Mondal et al. "Few-Shot Cross-Domain Image Generation via Inference-Time Latent-Code Learning." International Conference on Learning Representations, 2023.
Markdown
[Mondal et al. "Few-Shot Cross-Domain Image Generation via Inference-Time Latent-Code Learning." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/mondal2023iclr-fewshot/)
BibTeX
@inproceedings{mondal2023iclr-fewshot,
title = {{Few-Shot Cross-Domain Image Generation via Inference-Time Latent-Code Learning}},
author = {Mondal, Arnab Kumar and Tiwary, Piyush and Singla, Parag and Ap, Prathosh},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/mondal2023iclr-fewshot/}
}