CPGAN: Content-Parsing Generative Adversarial Networks for Text-to-Image Synthesis
Abstract
Typical methods for text-to-image synthesis seek to design effective generative architectures that model the text-to-image mapping directly, which is fairly arduous due to the cross-modality translation involved. In this paper we circumvent this problem by thoroughly parsing the content of both the input text and the synthesized image to model text-to-image consistency at the semantic level. In particular, we design a memory structure to parse the textual content by exploring the semantic correspondence between each word in the vocabulary and its various visual contexts across relevant images during text encoding. Meanwhile, the synthesized image is parsed to learn its semantics in an object-aware manner. Moreover, we customize a conditional discriminator to model the fine-grained correlations between words and image sub-regions, pushing for text-image semantic alignment. Extensive experiments on the COCO dataset demonstrate that our model advances the state-of-the-art performance significantly (from 35.69 to 52.73 in Inception Score).
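To make the fine-grained word-region correlation idea concrete, below is a minimal PyTorch-style sketch of how such a conditional-discriminator cue could be computed. It is an illustration under assumptions, not the authors' implementation: the function name, tensor dimensions, and the max-then-mean pooling are all hypothetical choices.

```python
# Hypothetical sketch: scoring fine-grained word-to-region alignment,
# usable as a conditional cue in a text-conditioned discriminator.
# All names, dimensions, and pooling choices are assumptions.
import torch
import torch.nn.functional as F

def word_region_alignment(word_feats, region_feats):
    """
    word_feats:   (B, T, D) word features from the text encoder
    region_feats: (B, R, D) image sub-region features (e.g. conv-map cells)
    Returns one text-image alignment score per example, shape (B,).
    """
    # Cosine-normalize both modalities so dot products measure similarity.
    w = F.normalize(word_feats, dim=-1)
    r = F.normalize(region_feats, dim=-1)

    # Similarity of every word to every sub-region: (B, T, R).
    sim = torch.bmm(w, r.transpose(1, 2))

    # Each word attends to its best-matching region, then word scores
    # are averaged into a single text-image alignment score.
    per_word = sim.max(dim=2).values   # (B, T)
    return per_word.mean(dim=1)        # (B,)

# Usage with random tensors: batch of 4, 12 words, 49 regions, 256-dim.
if __name__ == "__main__":
    scores = word_region_alignment(torch.randn(4, 12, 256),
                                   torch.randn(4, 49, 256))
    print(scores.shape)  # torch.Size([4])
```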
Cite
Text
Liang et al. "CPGAN: Content-Parsing Generative Adversarial Networks for Text-to-Image Synthesis." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58548-8_29
Markdown
[Liang et al. "CPGAN: Content-Parsing Generative Adversarial Networks for Text-to-Image Synthesis." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/liang2020eccv-cpgan/) doi:10.1007/978-3-030-58548-8_29
BibTeX
@inproceedings{liang2020eccv-cpgan,
title = {{CPGAN: Content-Parsing Generative Adversarial Networks for Text-to-Image Synthesis}},
author = {Liang, Jiadong and Pei, Wenjie and Lu, Feng},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58548-8_29},
url = {https://mlanthology.org/eccv/2020/liang2020eccv-cpgan/}
}