CogView: Mastering Text-to-Image Generation via Transformers

Abstract

Text-to-image generation in the general domain has long been an open problem: it requires both a powerful generative model and cross-modal understanding. We propose CogView, a 4-billion-parameter Transformer with a VQ-VAE image tokenizer, to advance this problem. We also demonstrate finetuning strategies for various downstream tasks, e.g., style learning, super-resolution, text-image ranking, and fashion design, as well as methods to stabilize pretraining, e.g., eliminating NaN losses. CogView achieves the state-of-the-art FID on the blurred MS COCO dataset, outperforming previous GAN-based models and DALL-E, a recent similar work.
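The abstract outlines the core recipe: a VQ-VAE tokenizer discretizes an image into a grid of codes, and a single autoregressive Transformer models the text tokens followed by the image tokens under a causal mask. Below is a minimal PyTorch sketch of that idea; all names, vocabulary sizes, and sequence lengths (ToyCogView, TEXT_VOCAB, IMAGE_VOCAB, etc.) are illustrative assumptions, not the paper's actual configuration, and the paper's pretraining-stabilization techniques are not shown.

# Hypothetical sketch of CogView-style autoregressive text-to-image modeling,
# based only on the abstract: image tokens from a VQ-VAE are concatenated
# with text tokens and modeled left-to-right by a GPT-style Transformer.
import torch
import torch.nn as nn

TEXT_VOCAB, IMAGE_VOCAB = 50_000, 8_192   # assumed vocabulary sizes
TEXT_LEN, IMAGE_LEN = 64, 16 * 16         # assumed sequence lengths (toy scale)

class ToyCogView(nn.Module):
    def __init__(self, d_model=256, n_layers=2, n_heads=8):
        super().__init__()
        vocab = TEXT_VOCAB + IMAGE_VOCAB  # shared embedding table
        self.embed = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(TEXT_LEN + IMAGE_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, text_ids, image_ids):
        # Image codes occupy a separate id range after the text vocabulary.
        seq = torch.cat([text_ids, image_ids + TEXT_VOCAB], dim=1)
        pos = torch.arange(seq.size(1), device=seq.device)
        h = self.embed(seq) + self.pos(pos)
        # Causal mask: each position may only attend to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
        h = self.blocks(h, mask=mask.to(seq.device))
        return self.head(h)

# Training objective: next-token prediction over [text; image-token] sequences.
model = ToyCogView()
text = torch.randint(0, TEXT_VOCAB, (2, TEXT_LEN))
image = torch.randint(0, IMAGE_VOCAB, (2, IMAGE_LEN))
logits = model(text, image)
targets = torch.cat([text, image + TEXT_VOCAB], dim=1)
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, logits.size(-1)),
    targets[:, 1:].reshape(-1))

At generation time, one would feed only the text tokens, sample image tokens autoregressively, and decode them back to pixels with the VQ-VAE decoder.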

Cite

Text

Ding et al. "CogView: Mastering Text-to-Image Generation via Transformers." Neural Information Processing Systems, 2021.

Markdown

[Ding et al. "CogView: Mastering Text-to-Image Generation via Transformers." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/ding2021neurips-cogview/)

BibTeX

@inproceedings{ding2021neurips-cogview,
  title     = {{CogView: Mastering Text-to-Image Generation via Transformers}},
  author    = {Ding, Ming and Yang, Zhuoyi and Hong, Wenyi and Zheng, Wendi and Zhou, Chang and Yin, Da and Lin, Junyang and Zou, Xu and Shao, Zhou and Yang, Hongxia and Tang, Jie},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/ding2021neurips-cogview/}
}