Highly Compressed Tokenizer Can Generate Without Training
Abstract
Commonly used image tokenizers produce a 2D grid of spatially arranged tokens. In contrast, so-called 1D image tokenizers represent images as highly compressed one-dimensional sequences of as few as 32 discrete tokens. We find that the high degree of compression achieved by a 1D tokenizer with vector quantization enables image editing and generative capabilities through heuristic manipulation of tokens, demonstrating that even very crude manipulations – such as copying and replacing tokens between latent representations of images – enable fine-grained image editing by transferring appearance and semantic attributes. Motivated by the expressivity of the 1D tokenizer’s latent space, we construct an image generation pipeline leveraging gradient-based test-time optimization of tokens with plug-and-play loss functions such as reconstruction or CLIP similarity. Our approach is demonstrated for inpainting and text-guided image editing use cases, and can generate diverse and realistic samples without requiring training of any generative model.
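The abstract describes two mechanisms: copy-and-replace editing of discrete tokens between images, and gradient-based test-time optimization of tokens under a plug-and-play loss such as CLIP similarity. The sketch below illustrates both under assumed interfaces; the names `encode`, `decode`, `encode_continuous`, `decode_continuous`, and the use of a continuous latent relaxation for the gradient step are hypothetical placeholders, not the authors' actual API.

```python
# Minimal sketch (assumptions): a 1D tokenizer exposing discrete encode/decode
# and a differentiable continuous-latent path; 32 tokens per image as in the abstract.
import torch

def copy_paste_edit(tokenizer, img_src, img_ref, idx):
    """Crude edit: replace selected tokens of the source image with the
    corresponding tokens of a reference image, then decode."""
    tok_src = tokenizer.encode(img_src)   # assumed shape (1, 32), discrete token ids
    tok_ref = tokenizer.encode(img_ref)   # assumed shape (1, 32)
    tok_src[:, idx] = tok_ref[:, idx]     # transfer appearance / semantic attributes
    return tokenizer.decode(tok_src)

def optimize_tokens(tokenizer, loss_fn, init_img, steps=200, lr=1e-1):
    """Test-time optimization of token embeddings under a plug-and-play loss
    (e.g. masked reconstruction for inpainting, or negative CLIP similarity
    to a text prompt). Operates on continuous embeddings so gradients flow;
    how the paper handles the discrete VQ step is not shown here."""
    z = tokenizer.encode_continuous(init_img).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        img = tokenizer.decode_continuous(z)  # assumed differentiable decode
        loss = loss_fn(img)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return tokenizer.decode_continuous(z).detach()
```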
Cite
Text
Beyer et al. "Highly Compressed Tokenizer Can Generate Without Training." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Beyer et al. "Highly Compressed Tokenizer Can Generate Without Training." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/beyer2025icml-highly/)
BibTeX
@inproceedings{beyer2025icml-highly,
  title = {{Highly Compressed Tokenizer Can Generate Without Training}},
  author = {Beyer, Lukas Lao and Li, Tianhong and Chen, Xinlei and Karaman, Sertac and He, Kaiming},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year = {2025},
  pages = {4096--4114},
  volume = {267},
  url = {https://mlanthology.org/icml/2025/beyer2025icml-highly/}
}