LayoutDM: Transformer-Based Diffusion Model for Layout Generation

Abstract

Automatic layout generation that can synthesize high-quality layouts is an important tool for graphic design in many applications. Although existing methods based on generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) have made progress, they still leave much room for improving the quality and diversity of the results. Inspired by the recent success of diffusion models in generating high-quality images, this paper explores their potential for conditional layout generation and proposes the Transformer-based Layout Diffusion Model (LayoutDM), which instantiates the conditional denoising diffusion probabilistic model (DDPM) with a purely transformer-based architecture. Instead of using convolutional neural networks, a transformer-based conditional Layout Denoiser is proposed to learn the reverse diffusion process and generate samples from noised layout data. Benefiting from both the transformer and the DDPM, our LayoutDM has desirable properties such as high-quality generation, strong sample diversity, faithful distribution coverage, and stable training in comparison to GANs and VAEs. Quantitative and qualitative experimental results show that our method outperforms state-of-the-art generative models in terms of quality and diversity.
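To make the diffusion setup concrete, the sketch below shows the standard DDPM forward (noising) process applied to a single layout element represented as a normalized box (x, y, w, h). The linear beta schedule, timestep count, and function names are illustrative assumptions for a minimal example, not the paper's exact configuration; the transformer-based Layout Denoiser would be trained to invert this process.

```python
import math
import random

# Minimal sketch of the DDPM forward process q(x_t | x_0) on layout data.
# A layout element is a box (x, y, w, h) in [0, 1]. The schedule below is
# a common linear beta schedule and is assumed for illustration only.

T = 1000                                            # number of diffusion steps
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alphas = [1.0 - b for b in betas]

# alpha_bar_t = product of alphas up to step t (cumulative signal retention)
alpha_bars = []
prod = 1.0
for a in alphas:
    prod *= a
    alpha_bars.append(prod)

def q_sample(x0, t, rng):
    """Draw x_t ~ N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    abar = alpha_bars[t]
    return [math.sqrt(abar) * v + math.sqrt(1.0 - abar) * rng.gauss(0.0, 1.0)
            for v in x0]

rng = random.Random(0)
box = [0.1, 0.2, 0.5, 0.3]        # one layout element: (x, y, w, h)
noised = q_sample(box, T - 1, rng)  # at t = T-1 the box is near pure noise
```

A denoiser (here, a conditional transformer over the layout's element tokens) would then learn to predict the added noise at each step, so that sampling can run the chain in reverse from Gaussian noise back to a clean layout.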

Cite

Text

Chai et al. "LayoutDM: Transformer-Based Diffusion Model for Layout Generation." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01760

Markdown

[Chai et al. "LayoutDM: Transformer-Based Diffusion Model for Layout Generation." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/chai2023cvpr-layoutdm/) doi:10.1109/CVPR52729.2023.01760

BibTeX

@inproceedings{chai2023cvpr-layoutdm,
  title     = {{LayoutDM: Transformer-Based Diffusion Model for Layout Generation}},
  author    = {Chai, Shang and Zhuang, Liansheng and Yan, Fengying},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {18349--18358},
  doi       = {10.1109/CVPR52729.2023.01760},
  url       = {https://mlanthology.org/cvpr/2023/chai2023cvpr-layoutdm/}
}