REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers

Abstract

In this paper we tackle a fundamental question: "Can we train latent diffusion models together with the variational auto-encoder (VAE) tokenizer in an end-to-end manner?" Traditional deep-learning wisdom dictates that end-to-end training is often preferable when possible. However, for latent diffusion transformers, training both the VAE and the diffusion model end-to-end with the standard diffusion loss is observed to be ineffective, and can even degrade final performance. We show that while the diffusion loss is ineffective, end-to-end training can be unlocked through the representation-alignment (REPA) loss -- allowing both the VAE and the diffusion model to be jointly tuned during training. Despite its simplicity, the proposed training recipe (REPA-E) shows remarkable performance; it speeds up diffusion model training by over 17x and 45x relative to the REPA and vanilla training recipes, respectively. Interestingly, we observe that end-to-end tuning with REPA-E also improves the VAE itself, leading to improved latent-space structure and downstream generation performance. In terms of final performance, our approach sets a new state of the art, achieving FID of 1.26 and 1.83 with and without classifier-free guidance on ImageNet 256x256. Code is available at https://end2end-diffusion.github.io.
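The abstract describes the recipe at a high level but does not spell out the training step. The sketch below is a minimal, hedged illustration of one plausible wiring of that recipe in PyTorch: the standard diffusion loss is computed on stop-gradient latents so it updates only the diffusion transformer, while the representation-alignment (REPA) loss backpropagates through both the transformer and the VAE encoder, providing the end-to-end signal. All modules here (ToyVAEEncoder, ToyDiffusionTransformer, ToyFrozenEncoder) and the projection head are hypothetical stand-ins, not the paper's architectures or hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyVAEEncoder(nn.Module):
    """Stand-in for the VAE tokenizer's encoder (trainable under REPA-E)."""
    def __init__(self, latent_dim=4):
        super().__init__()
        self.net = nn.Conv2d(3, latent_dim, kernel_size=8, stride=8)

    def forward(self, x):
        return self.net(x)


class ToyDiffusionTransformer(nn.Module):
    """Stand-in for the latent diffusion transformer; returns the denoising
    prediction and an intermediate feature map used for alignment."""
    def __init__(self, latent_dim=4, width=64):
        super().__init__()
        self.inp = nn.Conv2d(latent_dim, width, kernel_size=1)
        self.out = nn.Conv2d(width, latent_dim, kernel_size=1)

    def forward(self, z, t):
        h = F.silu(self.inp(z) + t.view(-1, 1, 1, 1))
        return self.out(h), h  # (prediction, intermediate features)


class ToyFrozenEncoder(nn.Module):
    """Stand-in for a frozen pretrained visual encoder used as the REPA target."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Conv2d(3, dim, kernel_size=8, stride=8)
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        return self.net(x).flatten(2).transpose(1, 2)  # (B, num_patches, dim)


def repa_alignment_loss(features, targets, proj):
    """Patch-wise negative cosine similarity between projected diffusion
    features and frozen-encoder features."""
    f = proj(features.flatten(2).transpose(1, 2))  # (B, num_patches, dim)
    return -F.cosine_similarity(f, targets, dim=-1).mean()


def training_step(vae, dit, proj, frozen_encoder, images, opt, lam=0.5):
    latents = vae(images)
    noise = torch.randn_like(latents)
    t = torch.rand(images.shape[0], device=images.device)
    tb = t.view(-1, 1, 1, 1)
    noisy = (1 - tb) * latents + tb * noise  # simple linear interpolation path

    # (a) Diffusion loss on stop-gradient latents: updates the transformer only
    #     (assumption: the plain diffusion loss is kept away from the VAE, since
    #     the abstract reports it is ineffective for end-to-end training).
    pred, _ = dit(noisy.detach(), t)
    loss_diff = F.mse_loss(pred, (noise - latents).detach())

    # (b) REPA alignment loss with gradients flowing through the VAE encoder:
    #     the term that unlocks end-to-end tuning of the tokenizer.
    _, feats = dit(noisy, t)
    with torch.no_grad():
        targets = frozen_encoder(images)
    loss_align = repa_alignment_loss(feats, targets, proj)

    loss = loss_diff + lam * loss_align
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Toy usage with random data (shapes only; not the paper's setup).
vae, dit = ToyVAEEncoder(), ToyDiffusionTransformer()
proj, enc = nn.Linear(64, 32), ToyFrozenEncoder()
opt = torch.optim.AdamW(
    list(vae.parameters()) + list(dit.parameters()) + list(proj.parameters()), lr=1e-4
)
images = torch.randn(2, 3, 32, 32)
print(training_step(vae, dit, proj, enc, images, opt))

In practice one would also keep the VAE's own reconstruction and regularization losses and use a pretrained self-supervised vision encoder as the frozen alignment target; the second forward pass above is only for clarity of the gradient routing.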

Cite

Text

Leng et al. "REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers." International Conference on Computer Vision, 2025.

Markdown

[Leng et al. "REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/leng2025iccv-repae/)

BibTeX

@inproceedings{leng2025iccv-repae,
  title     = {{REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers}},
  author    = {Leng, Xingjian and Singh, Jaskirat and Hou, Yunzhong and Xing, Zhenchang and Xie, Saining and Zheng, Liang},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {18262--18272},
  url       = {https://mlanthology.org/iccv/2025/leng2025iccv-repae/}
}