SPatchGAN: A Statistical Feature Based Discriminator for Unsupervised Image-to-Image Translation

Abstract

For unsupervised image-to-image translation, we propose a discriminator architecture that focuses on statistical features rather than individual patches. The network is stabilized by distribution matching of key statistical features at multiple scales. Unlike existing methods, which impose more and more constraints on the generator, our method facilitates shape deformation and enhances fine details with a greatly simplified framework. We show that the proposed method outperforms existing state-of-the-art models in several challenging applications, including selfie-to-anime, male-to-female and glasses removal.
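To make the core idea concrete, here is a minimal NumPy sketch of a statistics-based reduction: instead of scoring every spatial patch individually, a feature map is summarized by a few global statistics, collected at several scales. The choice of statistics (mean, max, standard deviation) and the 2x average-pooling between scales are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def spatch_stats(features):
    """Reduce an (H, W, C) feature map to per-channel statistics.

    Illustrative statistic set (an assumption): spatial mean, max,
    and standard deviation, concatenated into one vector of size 3*C.
    """
    flat = features.reshape(-1, features.shape[-1])  # (H*W, C)
    return np.concatenate([flat.mean(axis=0),
                           flat.max(axis=0),
                           flat.std(axis=0)])

def multiscale_stats(features, num_scales=3):
    """Collect statistics at several scales via 2x average pooling."""
    stats = []
    f = features
    for _ in range(num_scales):
        stats.append(spatch_stats(f))
        h, w, c = f.shape
        # Crop to even size, then average-pool 2x2 blocks.
        f = (f[: h // 2 * 2, : w // 2 * 2]
             .reshape(h // 2, 2, w // 2, 2, c)
             .mean(axis=(1, 3)))
    return stats
```

In a full discriminator, each scale's statistics vector would feed a small head producing a real/fake score; matching these statistics between the real and translated distributions is what stabilizes training in the method described above.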

Cite

Text

Shao and Zhang. "SPatchGAN: A Statistical Feature Based Discriminator for Unsupervised Image-to-Image Translation." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00648

Markdown

[Shao and Zhang. "SPatchGAN: A Statistical Feature Based Discriminator for Unsupervised Image-to-Image Translation." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/shao2021iccv-spatchgan/) doi:10.1109/ICCV48922.2021.00648

BibTeX

@inproceedings{shao2021iccv-spatchgan,
  title     = {{SPatchGAN: A Statistical Feature Based Discriminator for Unsupervised Image-to-Image Translation}},
  author    = {Shao, Xuning and Zhang, Weidong},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {6546--6555},
  doi       = {10.1109/ICCV48922.2021.00648},
  url       = {https://mlanthology.org/iccv/2021/shao2021iccv-spatchgan/}
}