TSIT: A Simple and Versatile Framework for Image-to-Image Translation
Abstract
We introduce a simple and versatile framework for image-to-image translation. We unearth the importance of normalization layers, and provide a carefully designed two-stream generative model with newly proposed feature transformations in a coarse-to-fine fashion. This allows multi-scale semantic structure information and style representation to be effectively captured and fused by the network, permitting our method to scale to various tasks in both unsupervised and supervised settings. No additional constraints (e.g., cycle consistency) are needed, contributing to a very clean and simple method. Multi-modal image synthesis with arbitrary style control is made possible. A systematic study compares the proposed method with several state-of-the-art task-specific baselines, verifying its effectiveness in both perceptual quality and quantitative evaluations.
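The abstract highlights normalization layers as the mechanism that fuses semantic structure with style representations. As a rough illustration of that general idea (not the paper's actual layer), the sketch below shows adaptive instance normalization, a common feature transformation in which content features are normalized per channel and then re-scaled with the style features' channel-wise statistics; all names and shapes here are assumptions for illustration.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Illustrative AdaIN-style feature transformation.

    Normalizes the content features per channel, then injects the
    style features' channel-wise mean and standard deviation.
    Expected shapes: (channels, height, width).
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    normalized = (content - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean

# Toy inputs standing in for multi-scale feature maps.
rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(3, 8, 8))
style = rng.normal(2.0, 0.5, size=(3, 8, 8))
out = adain(content, style)
# The output keeps the content's spatial structure but inherits the
# style's per-channel statistics.
```

In a coarse-to-fine two-stream design, a transformation of this kind would be applied at multiple feature resolutions so that both global layout and fine style details are captured.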
Cite
Text
Jiang et al. "TSIT: A Simple and Versatile Framework for Image-to-Image Translation." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58580-8_13

Markdown
[Jiang et al. "TSIT: A Simple and Versatile Framework for Image-to-Image Translation." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/jiang2020eccv-tsit/) doi:10.1007/978-3-030-58580-8_13

BibTeX
@inproceedings{jiang2020eccv-tsit,
  title     = {{TSIT: A Simple and Versatile Framework for Image-to-Image Translation}},
  author    = {Jiang, Liming and Zhang, Changxu and Huang, Mingyang and Liu, Chunxiao and Shi, Jianping and Loy, Chen Change},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58580-8_13},
  url       = {https://mlanthology.org/eccv/2020/jiang2020eccv-tsit/}
}