SoftVQ-VAE: Efficient 1-Dimensional Continuous Tokenizer
Abstract
Efficient image tokenization with high compression ratios remains a critical challenge for training generative models. We present SoftVQ-VAE, a continuous image tokenizer that leverages soft categorical posteriors to aggregate multiple codewords into each latent token, substantially increasing the representation capacity of the latent space. When applied to Transformer-based architectures, our approach compresses 256x256 and 512x512 images using only 32 or 64 1-dimensional tokens. Not only does SoftVQ-VAE show consistent and high-quality reconstruction; more importantly, it also achieves state-of-the-art and significantly faster image generation results across different denoising-based generative models. Remarkably, SoftVQ-VAE improves inference throughput by up to 18x for generating 256x256 images and 55x for 512x512 images, while achieving competitive FID scores of 1.78 and 2.21 with SiT-XL. It also improves the training efficiency of the generative models, reducing the number of training iterations by 2.3x while maintaining comparable performance. Our experiments demonstrate that, with its fully differentiable design and semantic-rich latent space, SoftVQ-VAE achieves efficient tokenization without compromising generation quality, paving the way for more efficient generative models. Code and models will be released.
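The core idea stated in the abstract, aggregating multiple codewords into each latent token through a soft categorical posterior, can be illustrated with a minimal PyTorch sketch. Everything below (the `SoftQuantizer` class name, codebook size, token dimension, and temperature) is an illustrative assumption, not the authors' released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftQuantizer(nn.Module):
    """Illustrative soft vector quantization: each latent token becomes a
    softmax-weighted mixture of all codewords, so the mapping stays fully
    differentiable (no straight-through gradient estimator needed).
    Hyperparameters here are assumptions, not the paper's settings."""

    def __init__(self, codebook_size: int = 8192, dim: int = 32, temperature: float = 1.0):
        super().__init__()
        # Learnable codebook of `codebook_size` codewords.
        self.codebook = nn.Embedding(codebook_size, dim)
        self.temperature = temperature

    def forward(self, z: torch.Tensor):
        # z: (batch, num_tokens, dim) continuous encoder outputs,
        # e.g. 32 or 64 one-dimensional tokens per image.
        w = self.codebook.weight                              # (K, dim)
        # Squared Euclidean distance from every token to every codeword.
        dist = (z.pow(2).sum(-1, keepdim=True)
                - 2 * z @ w.t()
                + w.pow(2).sum(-1))                           # (B, T, K)
        # Soft categorical posterior over the codebook.
        probs = F.softmax(-dist / self.temperature, dim=-1)   # (B, T, K)
        # Each token aggregates multiple codewords via the soft posterior.
        z_q = probs @ w                                       # (B, T, dim)
        return z_q, probs


if __name__ == "__main__":
    quantizer = SoftQuantizer()
    tokens = torch.randn(2, 32, 32)   # 2 images, 32 tokens, 32-dim each
    z_q, probs = quantizer(tokens)
    print(z_q.shape, probs.shape)     # (2, 32, 32) and (2, 32, 8192)
```

Because the assignment is a softmax rather than a hard argmax, gradients flow to every codeword, which is what makes such a design fully differentiable end to end.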
Cite
Text
Chen et al. "SoftVQ-VAE: Efficient 1-Dimensional Continuous Tokenizer." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02641
Markdown
[Chen et al. "SoftVQ-VAE: Efficient 1-Dimensional Continuous Tokenizer." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/chen2025cvpr-softvqvae/) doi:10.1109/CVPR52734.2025.02641
BibTeX
@inproceedings{chen2025cvpr-softvqvae,
title = {{SoftVQ-VAE: Efficient 1-Dimensional Continuous Tokenizer}},
author = {Chen, Hao and Wang, Ze and Li, Xiang and Sun, Ximeng and Chen, Fangyi and Liu, Jiang and Wang, Jindong and Raj, Bhiksha and Liu, Zicheng and Barsoum, Emad},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {28358-28370},
doi = {10.1109/CVPR52734.2025.02641},
url = {https://mlanthology.org/cvpr/2025/chen2025cvpr-softvqvae/}
}