Mask Based Unsupervised Content Transfer

Abstract

We consider the problem of translating, in an unsupervised manner, between two domains where one contains some additional information compared to the other. The proposed method disentangles the common and separate parts of these domains and, through the generation of a mask, focuses the attention of the underlying network on the desired augmentation alone, without wastefully reconstructing the entire target. This enables state-of-the-art quality and variety of content translation, as demonstrated through extensive quantitative and qualitative evaluation. Our method is also capable of adding the separate content of different guide images and domains, as well as removing existing separate content. Furthermore, our method enables weakly supervised semantic segmentation of the separate part of each domain, where only class labels are provided. Our code is available at https://github.com/rmokady/mbu-content-tansfer.
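The core idea of editing only where the mask is active, rather than reconstructing the whole target, can be illustrated with a minimal sketch. This is not the authors' implementation; `apply_mask_transfer` and its arguments are hypothetical names, and the guide-derived content and soft mask are assumed to come from learned networks not shown here.

```python
import numpy as np

def apply_mask_transfer(x, content, mask):
    """Blend guide-derived separate content into image x only where the
    soft mask is active; regions with mask == 0 are left untouched."""
    # mask values are assumed to lie in [0, 1] (e.g. a sigmoid output)
    return mask * content + (1.0 - mask) * x

# Toy demonstration: transfer content onto the top half of a blank image.
x = np.zeros((4, 4))        # input image (common part only)
content = np.ones((4, 4))   # separate content extracted from a guide image
mask = np.zeros((4, 4))
mask[:2, :] = 1.0           # edit only the top two rows
out = apply_mask_transfer(x, content, mask)
```

Because the network output is confined to the masked region, the rest of the target is passed through unchanged, which is what allows the method to focus capacity on the augmentation alone.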

Cite

Text

Mokady et al. "Mask Based Unsupervised Content Transfer." International Conference on Learning Representations, 2020.

Markdown

[Mokady et al. "Mask Based Unsupervised Content Transfer." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/mokady2020iclr-masked/)

BibTeX

@inproceedings{mokady2020iclr-masked,
  title     = {{Mask Based Unsupervised Content Transfer}},
  author    = {Mokady, Ron and Benaim, Sagie and Wolf, Lior and Bermano, Amit},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/mokady2020iclr-masked/}
}