Improving Visual Quality of Image Synthesis by a Token-Based Generator with Transformers

Abstract

We present a new perspective on image synthesis by viewing the task as a visual token generation problem. Unlike existing paradigms that directly synthesize a full image from a single input (e.g., a latent code), the new formulation enables flexible local manipulation of different image regions, which makes it possible to learn content-aware and fine-grained style control for image synthesis. Specifically, it takes a sequence of latent tokens as input and predicts the visual tokens used to synthesize an image. Under this perspective, we propose a token-based generator (i.e., TokenGAN). In particular, TokenGAN operates on two semantically different kinds of tokens: learned constant content tokens and style tokens drawn from the latent space. Given a sequence of style tokens, TokenGAN controls the image synthesis by assigning styles to the content tokens through the attention mechanism of a Transformer. We conduct extensive experiments and show that the proposed TokenGAN achieves state-of-the-art results on several widely used image synthesis benchmarks, including FFHQ and LSUN CHURCH at different resolutions. In particular, the generator is able to synthesize high-fidelity images at 1024x1024 resolution, dispensing with convolutions entirely.
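To make the core idea concrete, the sketch below illustrates one plausible reading of the abstract: learned constant content tokens attend to style tokens via Transformer-style cross-attention, so that each content token picks up its own style. This is a minimal illustration, not the authors' implementation; the class name `TokenStyleAttention`, the token counts, and the dimensions are assumptions chosen for readability.

```python
# Illustrative sketch (not the paper's code): a single cross-attention block
# that assigns styles from latent style tokens to learned constant content
# tokens, as described in the abstract. All names and sizes are assumptions.
import torch
import torch.nn as nn


class TokenStyleAttention(nn.Module):
    def __init__(self, dim: int = 256, num_content_tokens: int = 64):
        super().__init__()
        # Learned constant content tokens, shared across all images.
        self.content_tokens = nn.Parameter(torch.randn(1, num_content_tokens, dim))
        # Projections for cross-attention: content tokens query the style tokens.
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, style_tokens: torch.Tensor) -> torch.Tensor:
        # style_tokens: (batch, num_style_tokens, dim), e.g. produced by a
        # mapping network from a latent code.
        b = style_tokens.size(0)
        q = self.to_q(self.content_tokens.expand(b, -1, -1))  # (b, N_c, dim)
        k = self.to_k(style_tokens)                           # (b, N_s, dim)
        v = self.to_v(style_tokens)                           # (b, N_s, dim)
        # Attention weights decide which style token modulates which content
        # token, giving content-aware, region-wise style control.
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        styled = attn @ v                                      # (b, N_c, dim)
        # The styled visual tokens would subsequently be decoded into an image.
        return self.out(styled)


if __name__ == "__main__":
    block = TokenStyleAttention(dim=256, num_content_tokens=64)
    z_styles = torch.randn(2, 16, 256)   # 16 hypothetical style tokens per image
    visual_tokens = block(z_styles)
    print(visual_tokens.shape)           # torch.Size([2, 64, 256])
```

In the full model described by the abstract, such attention-based style assignment replaces the single global style input of conventional generators, which is what allows per-region control without convolutions.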

Cite

Text

Zeng et al. "Improving Visual Quality of Image Synthesis by a Token-Based Generator with Transformers." Neural Information Processing Systems, 2021.

Markdown

[Zeng et al. "Improving Visual Quality of Image Synthesis by a Token-Based Generator with Transformers." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/zeng2021neurips-improving/)

BibTeX

@inproceedings{zeng2021neurips-improving,
  title     = {{Improving Visual Quality of Image Synthesis by a Token-Based Generator with Transformers}},
  author    = {Zeng, Yanhong and Yang, Huan and Chao, Hongyang and Wang, Jianbo and Fu, Jianlong},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/zeng2021neurips-improving/}
}