Exploring Patch-Wise Semantic Relation for Contrastive Learning in Image-to-Image Translation Tasks

Abstract

Recently, contrastive learning-based image translation methods have been proposed that contrast features at different spatial locations to enhance spatial correspondence. However, these methods often ignore the diverse semantic relations within the images. To address this, we propose a novel semantic relation consistency (SRC) regularization together with decoupled contrastive learning (DCL), which exploits the diverse semantics by focusing on the heterogeneous semantics between the image patches of a single image. To further improve performance, we present a hard negative mining strategy that exploits the semantic relation. We verified our method on three tasks: single-modal and multi-modal image translation, and GAN compression for image translation. Experimental results confirm the state-of-the-art performance of our method on all three tasks.
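The patch-wise contrastive objective the abstract builds on can be illustrated with a PatchNCE-style InfoNCE loss over patch features, extended with a similarity-based reweighting of negatives (harder, more similar negatives count more). This is a minimal NumPy sketch under stated assumptions, not the authors' implementation: the `beta`-exponential weighting scheme and the function name `patch_nce_loss` are illustrative choices, and setting `beta=0` recovers the plain patch-wise InfoNCE loss.

```python
import numpy as np

def patch_nce_loss(query, keys, tau=0.07, beta=0.0):
    """Patch-wise InfoNCE loss with optional hard-negative weighting.

    query: (N, D) features of N patches from the translated image.
    keys:  (N, D) features of the corresponding source-image patches;
           keys[i] is the positive for query[i], all other rows are negatives.
    tau:   softmax temperature.
    beta:  hard-negative weighting exponent (illustrative; beta=0 gives
           the unweighted patch-wise InfoNCE loss).
    """
    # L2-normalise so dot products are cosine similarities
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    sim = q @ k.T / tau                      # (N, N) similarity logits
    n = sim.shape[0]

    pos = np.exp(np.diag(sim))               # positive pair terms
    neg_mask = ~np.eye(n, dtype=bool)        # off-diagonal = negatives
    neg = np.exp(sim) * neg_mask

    # Reweight negatives by similarity, normalised to mean weight 1,
    # so more similar ("harder") negatives contribute more to the loss.
    w = np.exp(beta * sim) * neg_mask
    w = w / (w.sum(axis=1, keepdims=True) / neg_mask.sum(axis=1, keepdims=True))

    loss = -np.log(pos / (pos + (w * neg).sum(axis=1)))
    return loss.mean()
```

With mutually orthogonal patch features (e.g. `np.eye(4)` for both arguments and `tau=1`), each positive contributes `e` and each of the three negatives contributes `1`, so the loss reduces to `-log(e / (e + 3))`, which is a quick sanity check for the implementation.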

Cite

Text

Jung et al. "Exploring Patch-Wise Semantic Relation for Contrastive Learning in Image-to-Image Translation Tasks." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01772

Markdown

[Jung et al. "Exploring Patch-Wise Semantic Relation for Contrastive Learning in Image-to-Image Translation Tasks." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/jung2022cvpr-exploring/) doi:10.1109/CVPR52688.2022.01772

BibTeX

@inproceedings{jung2022cvpr-exploring,
  title     = {{Exploring Patch-Wise Semantic Relation for Contrastive Learning in Image-to-Image Translation Tasks}},
  author    = {Jung, Chanyong and Kwon, Gihyun and Ye, Jong Chul},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {18260--18269},
  doi       = {10.1109/CVPR52688.2022.01772},
  url       = {https://mlanthology.org/cvpr/2022/jung2022cvpr-exploring/}
}