CR-GAN: Learning Complete Representations for Multi-View Generation
Abstract
Generating multi-view images from a single-view input is an important yet challenging problem. It has broad applications in vision, graphics, and robotics. Our study indicates that the widely-used generative adversarial network (GAN) may learn "incomplete" representations due to the single-pathway framework: an encoder-decoder network followed by a discriminator network. We propose CR-GAN to address this problem. In addition to the single reconstruction path, we introduce a generation sideway to maintain the completeness of the learned embedding space. The two learning paths collaborate and compete in a parameter-sharing manner, yielding largely improved generalization to "unseen" datasets. More importantly, the two-pathway framework makes it possible to combine both labeled and unlabeled data for self-supervised learning, which further enriches the embedding space for realistic generations. We evaluate our approach on a wide range of datasets. The results show that CR-GAN significantly outperforms state-of-the-art methods, especially when generating from "unseen" inputs in wild conditions.
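The abstract only sketches the two-pathway design, so the following is a minimal, hypothetical PyTorch illustration of the idea: a reconstruction path (encoder E followed by generator G) and a generation sideway (a random latent z passed through the same G), with G's parameters shared between the two paths. The module definitions, dimensions, and losses below are toy stand-ins for illustration, not the paper's actual architecture or objectives.

import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim = 128  # assumed latent size; the paper defines its own embedding dimension

# Toy stand-ins for the encoder E, generator G, and discriminator D.
E = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, z_dim))
G = nn.Sequential(nn.Linear(z_dim, 3 * 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))

x = torch.randn(8, 3, 64, 64)  # a batch of single-view inputs

# Generation sideway: random z -> G -> image, scored by D.
# Sampling z from the whole prior is what pushes G to cover the complete
# embedding space, rather than only the embeddings E produces.
z = torch.randn(8, z_dim)
fake = G(z).view(8, 3, 64, 64)
adv_score = D(fake)

# Reconstruction path: real image -> E -> the SAME G (shared parameters).
z_rec = E(x)
recon = G(z_rec).view(8, 3, 64, 64)
recon_loss = F.l1_loss(recon, x)

In the actual method the two paths are trained jointly with G shared between them, so embeddings produced by E land in a space G already knows how to decode; see the paper for the exact losses and view-conditioning.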
Cite
Text
Tian et al. "CR-GAN: Learning Complete Representations for Multi-View Generation." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/131Markdown
[Tian et al. "CR-GAN: Learning Complete Representations for Multi-View Generation." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/tian2018ijcai-cr/) doi:10.24963/IJCAI.2018/131BibTeX
@inproceedings{tian2018ijcai-cr,
title = {{CR-GAN: Learning Complete Representations for Multi-View Generation}},
author = {Tian, Yu and Peng, Xi and Zhao, Long and Zhang, Shaoting and Metaxas, Dimitris N.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {942--948},
doi = {10.24963/IJCAI.2018/131},
url = {https://mlanthology.org/ijcai/2018/tian2018ijcai-cr/}
}