Consistent Multimodal Generation via a Unified GAN Framework
Abstract
We investigate how to generate multimodal image outputs, such as RGB, depth, and surface normals, with a single generative model. The central challenge is to produce outputs that are both realistic and consistent with one another. Our solution builds on the StyleGAN3 architecture, with a shared backbone and modality-specific branches in the last layers of the synthesis network, and we propose per-modality fidelity discriminators and a cross-modality consistency discriminator. In experiments on the Stanford2D3D dataset, we demonstrate realistic and consistent generation of RGB, depth, and normal images. We also present a training recipe for easily extending our pretrained model to a new domain, even with only a small amount of paired data. We further evaluate the use of synthetically generated RGB and depth pairs for training or fine-tuning depth estimators. Code will be available at https://github.com/jessemelpolio/MultimodalGAN.
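To make the architecture described above concrete, below is a minimal, hedged sketch of the overall design: a shared synthesis backbone with modality-specific output branches, per-modality fidelity discriminators, and a cross-modality consistency discriminator that sees all modalities concatenated. This is not the authors' implementation (which builds on StyleGAN3); the simplified convolutional blocks, channel counts, and the `MODALITIES` mapping are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's code): shared backbone + modality branches,
# per-modality fidelity discriminators, and one cross-modality consistency
# discriminator. Plain conv blocks stand in for the StyleGAN3 synthesis network.
import torch
import torch.nn as nn

MODALITIES = {"rgb": 3, "depth": 1, "normal": 3}  # channels per modality (assumed)

class SharedGenerator(nn.Module):
    def __init__(self, z_dim=64, base_ch=64):
        super().__init__()
        # Shared backbone: latent code -> shared feature map
        self.backbone = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base_ch * 4, 4, 1, 0), nn.ReLU(True),       # 4x4
            nn.ConvTranspose2d(base_ch * 4, base_ch * 2, 4, 2, 1), nn.ReLU(True),  # 8x8
            nn.ConvTranspose2d(base_ch * 2, base_ch, 4, 2, 1), nn.ReLU(True),      # 16x16
        )
        # Modality-specific branches in the last layers of the synthesis network
        self.branches = nn.ModuleDict({
            name: nn.Sequential(
                nn.ConvTranspose2d(base_ch, base_ch, 4, 2, 1), nn.ReLU(True),      # 32x32
                nn.Conv2d(base_ch, ch, 3, 1, 1), nn.Tanh(),
            )
            for name, ch in MODALITIES.items()
        })

    def forward(self, z):
        feat = self.backbone(z.view(z.size(0), -1, 1, 1))
        return {name: branch(feat) for name, branch in self.branches.items()}

def make_discriminator(in_ch, base_ch=64):
    # Simple patch-style discriminator, used both per modality (fidelity)
    # and on the channel-wise concatenation of all modalities (consistency).
    return nn.Sequential(
        nn.Conv2d(in_ch, base_ch, 4, 2, 1), nn.LeakyReLU(0.2, True),
        nn.Conv2d(base_ch, base_ch * 2, 4, 2, 1), nn.LeakyReLU(0.2, True),
        nn.Conv2d(base_ch * 2, 1, 4, 1, 0),
    )

G = SharedGenerator()
fidelity_D = {name: make_discriminator(ch) for name, ch in MODALITIES.items()}
consistency_D = make_discriminator(sum(MODALITIES.values()))

z = torch.randn(2, 64)
outs = G(z)                                               # dict of per-modality images
per_mod_scores = {n: fidelity_D[n](outs[n]) for n in MODALITIES}
joint = torch.cat([outs[n] for n in MODALITIES], dim=1)   # stack modalities channel-wise
consistency_score = consistency_D(joint)                  # judges cross-modality consistency
```

In this sketch, each fidelity discriminator scores realism of a single modality, while the consistency discriminator scores the concatenated tuple, so the generator is pushed to produce modalities that agree with each other as well as look realistic individually.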
Cite
Text
Zhu et al. "Consistent Multimodal Generation via a Unified GAN Framework." Winter Conference on Applications of Computer Vision, 2024.

Markdown

[Zhu et al. "Consistent Multimodal Generation via a Unified GAN Framework." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/zhu2024wacv-consistent/)

BibTeX
@inproceedings{zhu2024wacv-consistent,
title = {{Consistent Multimodal Generation via a Unified GAN Framework}},
author = {Zhu, Zhen and Li, Yijun and Lyu, Weijie and Singh, Krishna Kumar and Shu, Zhixin and Pirk, Sören and Hoiem, Derek},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2024},
pages = {5048--5057},
url = {https://mlanthology.org/wacv/2024/zhu2024wacv-consistent/}
}