Self-Supervised Dense Consistency Regularization for Image-to-Image Translation

Abstract

Unsupervised image-to-image translation has gained considerable attention due to recent impressive progress based on generative adversarial networks (GANs). In this paper, we present a simple but effective regularization technique for improving GAN-based image-to-image translation. To generate images with realistic local semantics and structures, we propose an auxiliary self-supervised loss that enforces point-wise consistency over the overlapping region between a pair of patches cropped from a single real image while training the discriminator of a GAN. Our experiments show that this dense consistency regularization substantially improves performance across various image-to-image translation scenarios, and that it achieves additional gains when used jointly with recent instance-level regularization methods. Furthermore, we verify that the proposed model captures domain-specific characteristics more effectively with only a small fraction of the training data.
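The regularization described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration of the idea, not the authors' implementation: the overlap geometry (two horizontally shifted crops of one real image), the patch and overlap sizes, and the assumption that the discriminator exposes a dense feature map downsampled by a fixed integer factor are all illustrative assumptions.

import torch
import torch.nn.functional as F

def sample_overlapping_patches(img, patch=128, overlap=64):
    """Crop two patches from one real image whose windows share an
    `overlap`-pixel-wide vertical band (a simplifying assumption:
    the second crop is shifted horizontally by `patch - overlap`)."""
    _, _, h, w = img.shape
    stride = patch - overlap
    x0 = torch.randint(0, w - patch - stride + 1, (1,)).item()
    y0 = torch.randint(0, h - patch + 1, (1,)).item()
    p1 = img[:, :, y0:y0 + patch, x0:x0 + patch]
    p2 = img[:, :, y0:y0 + patch, x0 + stride:x0 + stride + patch]
    return p1, p2, overlap

def dense_consistency_loss(feat_fn, real_img, patch=128, overlap=64):
    """Point-wise consistency between discriminator features computed
    on the shared region of two overlapping crops of the same real
    image. `feat_fn` is a hypothetical hook that maps an image to a
    dense feature map of shape (B, C, h, w)."""
    p1, p2, ov = sample_overlapping_patches(real_img, patch, overlap)
    f1, f2 = feat_fn(p1), feat_fn(p2)
    scale = p1.shape[-1] // f1.shape[-1]  # input pixels per feature cell
    ov_f = ov // scale                    # overlap width in feature cells
    # The rightmost band of patch 1 and the leftmost band of patch 2
    # cover the same image pixels, so their features should agree.
    return F.mse_loss(f1[..., -ov_f:], f2[..., :ov_f])

In training, this term would be added to the usual discriminator objective with some weight, e.g. d_loss = adv_loss + lambda_dcr * dense_consistency_loss(disc_features, real_img), where lambda_dcr and disc_features are hypothetical names for the loss weight and the discriminator's intermediate-feature hook.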

Cite

Text

Ko et al. "Self-Supervised Dense Consistency Regularization for Image-to-Image Translation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01776

Markdown

[Ko et al. "Self-Supervised Dense Consistency Regularization for Image-to-Image Translation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/ko2022cvpr-selfsupervised/) doi:10.1109/CVPR52688.2022.01776

BibTeX

@inproceedings{ko2022cvpr-selfsupervised,
  title     = {{Self-Supervised Dense Consistency Regularization for Image-to-Image Translation}},
  author    = {Ko, Minsu and Cha, Eunju and Suh, Sungjoo and Lee, Huijin and Han, Jae-Joon and Shin, Jinwoo and Han, Bohyung},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {18301--18310},
  doi       = {10.1109/CVPR52688.2022.01776},
  url       = {https://mlanthology.org/cvpr/2022/ko2022cvpr-selfsupervised/}
}