Unpaired Image-to-Image Translation Using Adversarial Consistency Loss

Abstract

Unpaired image-to-image translation is a class of vision problems whose goal is to find the mapping between different image domains using unpaired training data. Cycle-consistency loss is a widely used constraint for such problems. However, due to its strict pixel-level constraint, it cannot perform geometric changes, remove large objects, or ignore irrelevant texture. In this paper, we propose a novel adversarial-consistency loss for image-to-image translation. This loss does not require the translated image to be translated back to a specific source image, yet it encourages the translated images to retain important features of the source images and overcomes the drawbacks of cycle-consistency loss noted above. Our method achieves state-of-the-art results on three challenging tasks: glasses removal, male-to-female translation, and selfie-to-anime translation.
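The pixel-level cycle-consistency constraint that the abstract contrasts against can be illustrated with a minimal sketch (function and variable names are hypothetical, not from the paper): the loss penalizes any per-pixel difference between a source image and its round-trip reconstruction, which is why it resists geometric changes and object removal.

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    # Strict pixel-level L1 penalty: every pixel of the round-trip
    # reconstruction must match the source image x, so any valid
    # translation that moves or removes content is still penalized.
    return np.mean(np.abs(x - x_reconstructed))

# Toy example: a "translation" that removes a large object changes
# many pixels, so the cycle loss stays high even though the output
# may be a perfectly valid target-domain image.
x = np.zeros((4, 4))
x[1:3, 1:3] = 1.0             # a 2x2 "object" in the source image
x_removed = np.zeros((4, 4))  # translation that removes the object
print(cycle_consistency_loss(x, x_removed))  # 4 changed pixels / 16 = 0.25
```

The adversarial-consistency loss proposed in the paper relaxes exactly this constraint: it asks only that translated images keep important source features, not that they reconstruct the source pixel for pixel.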

Cite

Text

Zhao et al. "Unpaired Image-to-Image Translation Using Adversarial Consistency Loss." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58545-7_46

Markdown

[Zhao et al. "Unpaired Image-to-Image Translation Using Adversarial Consistency Loss." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/zhao2020eccv-unpaired/) doi:10.1007/978-3-030-58545-7_46

BibTeX

@inproceedings{zhao2020eccv-unpaired,
  title     = {{Unpaired Image-to-Image Translation Using Adversarial Consistency Loss}},
  author    = {Zhao, Yihao and Wu, Ruihai and Dong, Hao},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58545-7_46},
  url       = {https://mlanthology.org/eccv/2020/zhao2020eccv-unpaired/}
}