Multi-Mapping Image-to-Image Translation via Learning Disentanglement

Abstract

Recent advances in image-to-image translation have focused on learning one-to-many mappings from two perspectives: multi-modal translation and multi-domain translation. However, existing methods consider only one of these two perspectives, so each is unable to solve the problem the other addresses. To bridge these two objectives, we propose a novel unified model. First, we disentangle input images into latent representations using an encoder-decoder architecture with conditional adversarial training in the feature space. Then, we encourage the generator to learn multi-mappings via random cross-domain translation. As a result, different parts of the latent representations can be manipulated to perform multi-modal and multi-domain translation simultaneously. Experiments demonstrate that our method outperforms state-of-the-art methods.
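
The abstract describes disentangling an image into a domain-invariant content code and a low-dimensional style code, then training the generator by decoding the content with a randomly sampled style and a randomly chosen target-domain label. Below is a minimal PyTorch sketch of that idea; the module names (ContentEncoder, StyleEncoder, Decoder), latent sizes, and conditioning scheme are illustrative assumptions, not the authors' actual architecture, and the paper's conditional adversarial training in the feature space is omitted.

# Minimal sketch of the multi-mapping translation idea from the abstract.
# All names and shapes are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an image to a domain-invariant content code."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(dim, dim * 2, 4, 2, 1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Maps an image to a low-dimensional style code (the multi-modal part)."""
    def __init__(self, in_ch=3, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, style_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Generates an image from (content code, style code, domain label)."""
    def __init__(self, dim=128, style_dim=8, num_domains=3, out_ch=3):
        super().__init__()
        self.fc = nn.Linear(style_dim + num_domains, dim)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(dim, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, content, style, domain_onehot):
        # Inject the style/domain condition into the spatial content code.
        cond = self.fc(torch.cat([style, domain_onehot], dim=1))
        return self.net(content + cond[:, :, None, None])

# One random cross-domain translation step: encode a source image, then
# decode its content with a random style code and a random target-domain
# label, so a single generator learns many mappings at once.
num_domains, style_dim = 3, 8
E_c = ContentEncoder()
E_s = StyleEncoder(style_dim=style_dim)
G = Decoder(num_domains=num_domains, style_dim=style_dim)

x = torch.randn(4, 3, 64, 64)                    # source image batch
content = E_c(x)                                 # shared content code
rand_style = torch.randn(4, style_dim)           # multi-modal: sampled style
rand_domain = torch.eye(num_domains)[torch.randint(num_domains, (4,))]  # multi-domain: random target
fake = G(content, rand_style, rand_domain)       # translated images
print(fake.shape)                                # torch.Size([4, 3, 64, 64])

In the full method, the conditional adversarial training mentioned in the abstract would add a discriminator over the latent representations, plus reconstruction losses, on top of this forward pass.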

Cite

Text

Yu et al. "Multi-Mapping Image-to-Image Translation via Learning Disentanglement." Neural Information Processing Systems, 2019.

Markdown

[Yu et al. "Multi-Mapping Image-to-Image Translation via Learning Disentanglement." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/yu2019neurips-multimapping/)

BibTeX

@inproceedings{yu2019neurips-multimapping,
  title     = {{Multi-Mapping Image-to-Image Translation via Learning Disentanglement}},
  author    = {Yu, Xiaoming and Chen, Yuanqi and Liu, Shan and Li, Thomas and Li, Ge},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {2994--3004},
  url       = {https://mlanthology.org/neurips/2019/yu2019neurips-multimapping/}
}