Scaling the Codebook Size of VQ-GAN to 100,000 with a Utilization Rate of 99%

Abstract

In the realm of image quantization exemplified by VQGAN, the process encodes images into discrete tokens drawn from a codebook with a predefined size. Recent advancements, particularly with LLAMA 3, reveal that enlarging the codebook significantly enhances model performance. However, VQGAN and its derivatives, such as VQGAN-FC (Factorized Codes) and VQGAN-EMA, continue to grapple with challenges related to expanding the codebook size and enhancing codebook utilization. For instance, VQGAN-FC is restricted to learning a codebook with a maximum size of 16,384, with a typically low utilization rate of less than 12% on ImageNet. In this work, we propose a novel image quantization model named VQGAN-LC (Large Codebook), which extends the codebook size to 100,000, achieving a utilization rate exceeding 99%. Unlike previous methods that optimize each codebook entry, our approach begins with a codebook initialized with 100,000 features extracted by a pre-trained vision encoder. Optimization then focuses on training a projector that aligns the entire codebook with the feature distributions of the encoder in VQGAN-LC. We demonstrate the superior performance of our model over its counterparts across a variety of tasks, including image reconstruction, image classification, auto-regressive image generation using GPT, and image creation with diffusion- and flow-based generative models.
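
The core mechanism described in the abstract (a frozen codebook of pre-extracted features plus a small trainable projector, with nearest-neighbour quantization in the projected space) can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration rather than the authors' implementation; the class name, dimensions, and the 0.25 commitment weight are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenCodebookQuantizer(nn.Module):
    """Sketch of the VQGAN-LC idea: the codebook is frozen (initialized
    from features of a pre-trained vision encoder) and only a projector
    mapping it into the tokenizer's latent space is trained."""

    def __init__(self, init_features: torch.Tensor, latent_dim: int):
        super().__init__()
        # init_features: (num_codes, feat_dim), e.g. 100,000 patch
        # features gathered offline from a pre-trained vision encoder
        # (assumption: how the initial features are collected is not
        # specified in this sketch).
        self.register_buffer("codebook", init_features)  # frozen, never updated
        self.projector = nn.Linear(init_features.shape[1], latent_dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, latent_dim) flattened encoder outputs.
        codes = self.projector(self.codebook)      # (num_codes, latent_dim)
        d = torch.cdist(z, codes)                  # pairwise L2 distances
        idx = d.argmin(dim=1)                      # nearest-neighbour assignment
        z_q = codes[idx]
        # First term trains the projector (codebook side); second is the
        # commitment loss on the encoder side (0.25 weight is an assumption).
        vq_loss = F.mse_loss(z_q, z.detach()) + 0.25 * F.mse_loss(z, z_q.detach())
        # Straight-through estimator: decoder gradients flow back to z.
        z_q_st = z + (z_q - z).detach()
        return z_q_st, idx, vq_loss

# Usage sketch: 100,000 entries of a hypothetical 768-d encoder feature,
# projected into a 256-d latent space.
quantizer = FrozenCodebookQuantizer(torch.randn(100_000, 768), latent_dim=256)
z_q, idx, loss = quantizer(torch.randn(16, 256))
```

Because the codebook itself is never optimized, every entry stays anchored to a real feature from the pre-trained encoder, which is what allows such a large codebook to remain almost fully utilized.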

Cite

Text

Zhu et al. "Scaling the Codebook Size of VQ-GAN to 100,000 with a Utilization Rate of 99%." Neural Information Processing Systems, 2024. doi:10.52202/079017-0401

Markdown

[Zhu et al. "Scaling the Codebook Size of VQ-GAN to 100,000 with a Utilization Rate of 99%." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhu2024neurips-scaling/) doi:10.52202/079017-0401

BibTeX

@inproceedings{zhu2024neurips-scaling,
  title     = {{Scaling the Codebook Size of VQ-GAN to 100,000 with a Utilization Rate of 99\%}},
  author    = {Zhu, Lei and Wei, Fangyun and Lu, Yanye and Chen, Dong},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0401},
  url       = {https://mlanthology.org/neurips/2024/zhu2024neurips-scaling/}
}