Style-Guided and Disentangled Representation for Robust Image-to-Image Translation
Abstract
Recently, various image-to-image translation (I2I) methods have improved mode diversity and visual quality through new network architectures or regularization terms. However, conventional I2I methods rely on a static decision boundary, and their encoded representations are entangled with each other, so they often suffer from the 'mode collapse' phenomenon. To mitigate mode collapse, 1) we design a so-called style-guided discriminator that guides an input image toward the target image style based on a flexible decision boundary, and 2) we make the encoded representations include independent domain attributes. Based on these two ideas, this paper proposes Style-Guided and Disentangled Representation for Robust Image-to-Image Translation (SRIT). SRIT shows an outstanding FID improvement of 8%, 22.8%, and 10.1% for the CelebA-HQ, AFHQ, and Yosemite datasets, respectively. The translated images of SRIT successfully reflect the styles of the target domain, indicating that SRIT achieves better mode diversity than previous works.
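The abstract itself is code-free; as a rough illustration only, the following is a minimal PyTorch sketch of the two stated ideas — a discriminator with an extra style-prediction head (style guidance) and an encoder that splits its output into a content code and a domain-style code (disentanglement). All module names, layer sizes, and the loss below are hypothetical placeholders, not the paper's actual SRIT architecture.

```python
# Minimal, hypothetical sketch of the abstract's two ideas (not the paper's code):
# (1) a discriminator that predicts a style code in addition to real/fake,
#     so translations can be pulled toward the target style, and
# (2) an encoder whose representation is split into content and domain-style parts.
import torch
import torch.nn as nn

class StyleGuidedDiscriminator(nn.Module):
    def __init__(self, style_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(128, 1)            # real / fake logit
        self.style_head = nn.Linear(128, style_dim)  # predicted style code

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.style_head(h)

class DisentangledEncoder(nn.Module):
    """Encodes an image into a shared content code and a separate domain-style code."""
    def __init__(self, content_dim=128, style_dim=64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_content = nn.Linear(64, content_dim)
        self.to_style = nn.Linear(64, style_dim)

    def forward(self, x):
        h = self.shared(x)
        return self.to_content(h), self.to_style(h)

if __name__ == "__main__":
    disc = StyleGuidedDiscriminator()
    enc = DisentangledEncoder()
    fake = torch.randn(2, 3, 64, 64)       # stand-in for a translated image
    target_style = torch.randn(2, 64)      # style code sampled for the target domain
    adv_logit, pred_style = disc(fake)
    # Matching the discriminator's predicted style to the target style code
    # is one way a discriminator can "guide" the generator toward that style.
    style_guidance_loss = nn.functional.l1_loss(pred_style, target_style)
    content, style = enc(fake)
    print(adv_logit.shape, style_guidance_loss.item(), content.shape, style.shape)
```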
Cite
Text
Choi et al. "Style-Guided and Disentangled Representation for Robust Image-to-Image Translation." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I1.19924
Markdown
[Choi et al. "Style-Guided and Disentangled Representation for Robust Image-to-Image Translation." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/choi2022aaai-style/) doi:10.1609/AAAI.V36I1.19924
BibTeX
@inproceedings{choi2022aaai-style,
title = {{Style-Guided and Disentangled Representation for Robust Image-to-Image Translation}},
author = {Choi, Jaewoong and Kim, Dae Ha and Song, Byung Cheol},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {463-471},
doi = {10.1609/AAAI.V36I1.19924},
url = {https://mlanthology.org/aaai/2022/choi2022aaai-style/}
}