Adaptive Feature Interpolation for Low-Shot Image Generation
Abstract
Training of generative models, especially Generative Adversarial Networks, can easily diverge in low-data settings. To mitigate this issue, we propose a novel implicit data augmentation approach which facilitates stable training and synthesizes high-quality samples without the need for label information. Specifically, we view the discriminator as a metric embedding of the real data manifold, which offers proper distances between real data points. We then utilize information in the feature space to develop a fully unsupervised and data-driven augmentation method. Experiments on few-shot generation tasks show that the proposed method significantly improves results over strong baselines with hundreds of training samples.
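The core idea in the abstract, augmenting implicitly by interpolating between real samples in the discriminator's feature space rather than in pixel space, can be illustrated with a minimal sketch. The random pairing and the uniform sampling of the mixing coefficient below are illustrative assumptions, not the paper's exact adaptive scheme:

```python
import numpy as np

def interpolate_features(feats, alpha_low=0.0, alpha_high=1.0, rng=None):
    """Mixup-style linear interpolation between pairs of feature vectors.

    feats: (N, D) array of discriminator features for real samples.
    Each sample is mixed with a randomly chosen partner using a
    coefficient drawn uniformly from [alpha_low, alpha_high].
    This is a simplified stand-in for the paper's adaptive,
    distance-aware interpolation.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = feats.shape[0]
    perm = rng.permutation(n)                         # random partner per sample
    lam = rng.uniform(alpha_low, alpha_high, (n, 1))  # per-sample mixing weight
    return lam * feats + (1.0 - lam) * feats[perm]    # convex combination

feats = np.arange(12, dtype=float).reshape(4, 3)
augmented = interpolate_features(feats, rng=np.random.default_rng(0))
```

Because each output row is a convex combination of two real feature vectors, the augmented points stay on (or near) the embedded data manifold, which is what makes the augmentation "implicit" and label-free.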
Cite
Text
Dai et al. "Adaptive Feature Interpolation for Low-Shot Image Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19784-0_15
Markdown
[Dai et al. "Adaptive Feature Interpolation for Low-Shot Image Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/dai2022eccv-adaptive/) doi:10.1007/978-3-031-19784-0_15
BibTeX
@inproceedings{dai2022eccv-adaptive,
title = {{Adaptive Feature Interpolation for Low-Shot Image Generation}},
author = {Dai, Mengyu and Hang, Haibin and Guo, Xiaoyang},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19784-0_15},
url = {https://mlanthology.org/eccv/2022/dai2022eccv-adaptive/}
}