CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers
Abstract
Development of transformer-based text-to-image models is impeded by their slow generation and high complexity for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, the Cross-Modal General Language Model (CogLM), and fine-tune it for fast super-resolution. The new text-to-image system, CogView2, shows very competitive generation compared with the concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing on images.
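The speed claim in the abstract rests on local parallel autoregressive generation: instead of decoding the image token grid strictly left-to-right, tokens at the same relative offset within every local window are predicted simultaneously, so the number of forward passes depends on the window size rather than the grid size. The following is a minimal PyTorch sketch of this idea only, not the authors' released implementation; the `local_parallel_decode` name, the `model(tokens)` interface, and the grid/window sizes are assumptions for illustration.

```python
import torch

def local_parallel_decode(model, tokens, grid=32, window=4):
    """Sketch of local-parallel autoregressive decoding (assumed interface).

    The grid x grid token map is tiled into window x window local blocks.
    At each step, the token at the same in-block offset is sampled for
    every block at once, so a full image needs only window * window
    forward passes instead of grid * grid.

    model:  callable mapping tokens (B, grid*grid) -> logits (B, grid*grid, V)
    tokens: LongTensor (B, grid*grid) pre-filled with mask/placeholder ids
    """
    batch = tokens.shape[0]
    for dy in range(window):
        for dx in range(window):
            # Positions sharing the offset (dy, dx) across all local blocks.
            rows = torch.arange(dy, grid, window)
            cols = torch.arange(dx, grid, window)
            idx = (rows[:, None] * grid + cols[None, :]).flatten()
            logits = model(tokens)                     # (B, grid*grid, V)
            probs = logits[:, idx].softmax(dim=-1)     # (B, n_blocks, V)
            # Sample all blocks' tokens at this offset in parallel.
            sampled = torch.multinomial(
                probs.flatten(0, 1), num_samples=1).view(batch, -1)
            tokens[:, idx] = sampled
    return tokens
```

With `grid=32` and `window=4`, this fills all 1,024 positions in 16 model calls; the paper's actual window shape and sampling details may differ.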
Cite
Text
Ding et al. "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers." Neural Information Processing Systems, 2022.
Markdown
[Ding et al. "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/ding2022neurips-cogview2/)
BibTeX
@inproceedings{ding2022neurips-cogview2,
  title = {{CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers}},
  author = {Ding, Ming and Zheng, Wendi and Hong, Wenyi and Tang, Jie},
  booktitle = {Neural Information Processing Systems},
  year = {2022},
  url = {https://mlanthology.org/neurips/2022/ding2022neurips-cogview2/}
}