Zero-Pair Image to Image Translation Using Domain Conditional Normalization
Abstract
In this paper, we propose an approach based on domain conditional normalization (DCN) for zero-pair image-to-image translation, i.e., translating between two domains that have no paired training data available but that each have paired training data with a third domain. We employ a single generator with an encoder-decoder structure and analyze different implementations of domain conditional normalization for obtaining the desired target domain output. The validation benchmark uses RGB-depth pairs and RGB-semantic pairs for training and compares performance on the depth-semantic translation task. The proposed approaches outperform the compared methods both qualitatively and quantitatively, while using far fewer parameters.
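To make the core idea concrete, the sketch below shows one plausible way a domain conditional normalization layer could be implemented in PyTorch: features are normalized in a parameter-free way and then re-scaled with per-domain affine parameters selected by a target-domain index. This is only an illustration of the general technique; the class name `DomainConditionalNorm` and parameters such as `num_domains` are assumptions for this sketch and are not taken from the authors' code.

```python
# Minimal sketch of a domain-conditional normalization layer (assumed design,
# not the paper's exact implementation).
import torch
import torch.nn as nn


class DomainConditionalNorm(nn.Module):
    def __init__(self, num_features: int, num_domains: int):
        super().__init__()
        # Parameter-free normalization of the feature maps.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # One learned (gamma, beta) pair per target domain.
        self.gamma = nn.Embedding(num_domains, num_features)
        self.beta = nn.Embedding(num_domains, num_features)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x: torch.Tensor, domain: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature maps; domain: (N,) integer domain labels.
        h = self.norm(x)
        g = self.gamma(domain).unsqueeze(-1).unsqueeze(-1)  # (N, C, 1, 1)
        b = self.beta(domain).unsqueeze(-1).unsqueeze(-1)
        return g * h + b


if __name__ == "__main__":
    # The same decoder features can be steered towards different output domains
    # (e.g. depth vs. semantics) simply by passing a different domain index.
    dcn = DomainConditionalNorm(num_features=64, num_domains=3)
    feats = torch.randn(2, 64, 32, 32)
    out_a = dcn(feats, torch.tensor([1, 1]))  # hypothetical "depth" domain id
    out_b = dcn(feats, torch.tensor([2, 2]))  # hypothetical "semantics" domain id
    print(out_a.shape, out_b.shape)
```

In such a design, a single shared generator keeps one set of convolutional weights, and only the lightweight per-domain normalization parameters differ between output domains, which is consistent with the abstract's claim of using far fewer parameters than multi-generator baselines.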
Cite
Text
Shukla et al. "Zero-Pair Image to Image Translation Using Domain Conditional Normalization." Winter Conference on Applications of Computer Vision, 2021.
Markdown
[Shukla et al. "Zero-Pair Image to Image Translation Using Domain Conditional Normalization." Winter Conference on Applications of Computer Vision, 2021.](https://mlanthology.org/wacv/2021/shukla2021wacv-zeropair/)
BibTeX
@inproceedings{shukla2021wacv-zeropair,
title = {{Zero-Pair Image to Image Translation Using Domain Conditional Normalization}},
author = {Shukla, Samarth and Romero, Andres and Van Gool, Luc and Timofte, Radu},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2021},
pages = {3512--3519},
url = {https://mlanthology.org/wacv/2021/shukla2021wacv-zeropair/}
}