ReMix: Towards Image-to-Image Translation with Limited Data

Abstract

Image-to-image (I2I) translation methods based on generative adversarial networks (GANs) typically suffer from overfitting when limited training data is available. In this work, we propose a data augmentation method (ReMix) to tackle this issue. We interpolate training samples at the feature level and propose a novel content loss based on the perceptual relations among samples. The generator then learns to translate in-between samples rather than memorize the training set, thereby forcing the discriminator to generalize. The proposed approach effectively reduces the ambiguity of generation and renders content-preserving results. The ReMix method can be easily incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with the ReMix method achieve significant improvements.
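The feature-level interpolation described above can be sketched as a mixup-style convex combination of two samples' feature vectors. This is a minimal illustration, not the paper's exact recipe: the function name, the Beta-distributed mixing coefficient, and the `alpha` parameter are assumptions for the sketch.

```python
import random

def remix_interpolate(feat_a, feat_b, alpha=0.2):
    """Hypothetical sketch of feature-level interpolation (mixup-style).

    feat_a, feat_b: feature vectors (lists of floats) of two training samples.
    alpha: shape parameter of the Beta distribution used to draw the
    mixing coefficient (an assumption, not from the paper).
    Returns the in-between feature vector and the coefficient used.
    """
    lam = random.betavariate(alpha, alpha)  # mixing coefficient in [0, 1]
    mixed = [lam * a + (1.0 - lam) * b for a, b in zip(feat_a, feat_b)]
    return mixed, lam

# The generator would be trained to translate `mixed` rather than the
# original samples, discouraging memorization of the training set.
mixed, lam = remix_interpolate([0.0, 1.0, 2.0], [2.0, 1.0, 0.0])
```

Each element of the result lies between the corresponding elements of the two inputs, so the generator only ever sees points on the line segment joining the two feature vectors.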

Cite

Text

Cao et al. "ReMix: Towards Image-to-Image Translation with Limited Data." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.01477

Markdown

[Cao et al. "ReMix: Towards Image-to-Image Translation with Limited Data." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/cao2021cvpr-remix/) doi:10.1109/CVPR46437.2021.01477

BibTeX

@inproceedings{cao2021cvpr-remix,
  title     = {{ReMix: Towards Image-to-Image Translation with Limited Data}},
  author    = {Cao, Jie and Hou, Luanxuan and Yang, Ming-Hsuan and He, Ran and Sun, Zhenan},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {15018--15027},
  doi       = {10.1109/CVPR46437.2021.01477},
  url       = {https://mlanthology.org/cvpr/2021/cao2021cvpr-remix/}
}