Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers

Abstract

We present the Hourglass Diffusion Transformer (HDiT), an image-generative model that exhibits linear scaling with pixel count, supporting training at high resolution (e.g. $1024 \times 1024$) directly in pixel space. Building on the Transformer architecture, which is known to scale to billions of parameters, it bridges the gap between the efficiency of convolutional U-Nets and the scalability of Transformers. HDiT trains successfully without typical high-resolution training techniques such as multiscale architectures, latent autoencoders, or self-conditioning. We demonstrate that HDiT performs competitively with existing models on ImageNet $256^2$, and sets a new state-of-the-art for diffusion models on FFHQ-$1024^2$. Code is available at https://github.com/crowsonkb/k-diffusion.
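The linear-scaling claim can be illustrated with a back-of-the-envelope cost model. The sketch below is a simplification, not the paper's implementation: it assumes (hypothetically chosen) parameters of local windowed attention at the high-resolution levels of an hourglass hierarchy and global attention only at the downsampled bottleneck, so the quadratic term applies only to a small, fixed number of tokens while the per-level local cost grows linearly with pixel count.

```python
def attention_cost(tokens, window=None):
    """Pairwise-interaction count for self-attention.

    Global attention: tokens^2 (quadratic in token count).
    Windowed attention: each token attends to `window` neighbors,
    so tokens * window (linear in token count).
    """
    return tokens * (window if window is not None else tokens)


def hourglass_cost(pixels, levels=3, window=256, patch=4):
    """Rough cost model for an hourglass transformer.

    All parameters here (levels, window size, patch size) are
    illustrative assumptions, not the paper's configuration:
    local attention at each high-resolution level, global
    attention only at the coarsest (bottleneck) level.
    """
    tokens = pixels // (patch * patch)   # tokens after patching
    cost = 0
    for _ in range(levels):              # high-res levels: local attention
        cost += attention_cost(tokens, window)
        tokens //= 4                     # 2x2 downsample quarters the tokens
    cost += attention_cost(tokens)       # bottleneck: global attention
    return cost


# Quadrupling the pixel count (doubling the side length) should
# roughly quadruple the cost, i.e. near-linear scaling.
ratio = hourglass_cost(1024 * 1024) / hourglass_cost(512 * 512)
```

Under purely global attention the same resolution jump would multiply the cost by roughly 16 (quadratic in token count); the hourglass structure keeps the ratio close to 4.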

Cite

Text

Crowson et al. "Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers." International Conference on Machine Learning, 2024.

Markdown

[Crowson et al. "Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/crowson2024icml-scalable/)

BibTeX

@inproceedings{crowson2024icml-scalable,
  title     = {{Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers}},
  author    = {Crowson, Katherine and Baumann, Stefan Andreas and Birch, Alex and Abraham, Tanishq Mathew and Kaplan, Daniel Z and Shippole, Enrico},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {9550--9575},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/crowson2024icml-scalable/}
}