Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models.

Abstract

In this work, we introduce Pixelsmith, a zero-shot text-to-image generative framework for sampling images at higher resolutions with a single GPU. We are the first to show that it is possible to scale the output of a pre-trained diffusion model by a factor of 1000, opening the road to gigapixel image generation at no extra cost. Our cascading method uses the image generated at the lowest resolution as a baseline for sampling at higher resolutions. For guidance, we introduce the Slider, a mechanism that fuses the overall structure of the first-generated image with enhanced fine details. At each inference step, we denoise patches rather than the entire latent space, minimizing memory demands so that a single GPU can handle the process regardless of the image's resolution. Our experimental results show that this method not only achieves higher quality and diversity than existing techniques but also reduces sampling time and artifacts.
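The patch-wise denoising described in the abstract can be sketched in PyTorch. This is a hypothetical illustration, not the authors' implementation: the function name denoise_patchwise, the patch and stride sizes, and the uniform overlap averaging are all assumptions, and the placeholder denoiser stands in for a pre-trained diffusion UNet step.

import torch

def denoise_patchwise(latent, denoise_fn, patch=64, stride=48):
    """Denoise a large latent by processing overlapping patches one at a time.

    Peak memory is bounded by the patch size rather than the full latent
    resolution; overlapping regions are averaged to soften seams.
    (Illustrative sketch only, not the Pixelsmith code.)
    """
    _, _, H, W = latent.shape
    out = torch.zeros_like(latent)
    weight = torch.zeros_like(latent)
    ys = list(range(0, max(H - patch, 0) + 1, stride))
    xs = list(range(0, max(W - patch, 0) + 1, stride))
    if ys[-1] != max(H - patch, 0):   # make sure the bottom edge is covered
        ys.append(max(H - patch, 0))
    if xs[-1] != max(W - patch, 0):   # make sure the right edge is covered
        xs.append(max(W - patch, 0))
    for y in ys:
        for x in xs:
            tile = latent[:, :, y:y + patch, x:x + patch]
            out[:, :, y:y + patch, x:x + patch] += denoise_fn(tile)
            weight[:, :, y:y + patch, x:x + patch] += 1.0
    return out / weight

if __name__ == "__main__":
    latent = torch.randn(1, 4, 256, 256)              # latent for a large image
    identity = lambda t: t                            # placeholder for one UNet denoising step
    print(denoise_patchwise(latent, identity).shape)  # torch.Size([1, 4, 256, 256])

Averaging overlapping patches is only one way to hide seams; the paper's Slider mechanism additionally fuses the structure of the first-generated image with the refined details, which this sketch does not model.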

Cite

Text

Tragakis et al. "Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models." Neural Information Processing Systems, 2024. doi:10.52202/079017-1305

Markdown

[Tragakis et al. "Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/tragakis2024neurips-one/) doi:10.52202/079017-1305

BibTeX

@inproceedings{tragakis2024neurips-one,
  title     = {{Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models.}},
  author    = {Tragakis, Athanasios and Aversa, Marco and Kaul, Chaitanya and Murray-Smith, Roderick and Faccio, Daniele},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1305},
  url       = {https://mlanthology.org/neurips/2024/tragakis2024neurips-one/}
}