Visual-Word Tokenizer: Beyond Fixed Sets of Tokens in Vision Transformers
Abstract
The cost of deploying vision transformers increasingly represents a barrier to wider industrial adoption. Existing compression techniques require additional end-to-end fine-tuning or incur a significant penalty in energy efficiency, making them ill-suited for online (real-time) inference, where a prediction is made on each new input as it arrives. We introduce the Visual-Word Tokenizer (VWT), a training-free method for reducing energy costs while retaining performance. The VWT groups frequently used visual subwords (image patches) into visual words, while infrequent ones remain intact. To do so, it leverages intra-image or inter-image statistics to identify similar visual concepts for sequence compression. Experimentally, we demonstrate a reduction in energy consumption of up to 47%. Comparative approaches such as 8-bit quantization and token merging can significantly increase energy costs (by 500% or more). Our results indicate that VWTs are well suited for efficient online inference with only a marginal compromise in performance. The experimental code for our paper is publicly available.
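The grouping step described above can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' implementation: it presumes a pre-computed codebook of frequent visual words (e.g., centroids from offline clustering of patch embeddings across a corpus, corresponding to the inter-image variant) and a cosine-similarity threshold; the function name and the `threshold` parameter are hypothetical.

```python
import torch

def compress_tokens(patch_embeds, codebook, threshold=0.8):
    """Merge patches that match a frequent visual word; keep the rest intact.

    patch_embeds: (num_patches, dim) patch embeddings of one image
    codebook:     (num_words, dim) centroids of frequent visual words (assumed precomputed)
    threshold:    minimum cosine similarity to count as a match (hypothetical knob)
    """
    # Cosine similarity between every patch and every visual-word centroid.
    p = torch.nn.functional.normalize(patch_embeds, dim=-1)
    c = torch.nn.functional.normalize(codebook, dim=-1)
    sim = p @ c.T                             # (num_patches, num_words)
    best_sim, best_word = sim.max(dim=-1)

    matched = best_sim >= threshold
    kept = [patch_embeds[~matched]]           # infrequent patches remain intact

    # Patches assigned to the same visual word are pooled into a single token.
    for w in best_word[matched].unique():
        group = patch_embeds[matched & (best_word == w)]
        kept.append(group.mean(dim=0, keepdim=True))

    return torch.cat(kept, dim=0)             # shorter sequence fed to the transformer
```

The resulting shorter token sequence is what yields the energy savings: the transformer's self-attention cost scales with sequence length, so merging frequent patches reduces compute without any fine-tuning.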
Cite
Text
Gee et al. "Visual-Word Tokenizer: Beyond Fixed Sets of Tokens in Vision Transformers." Transactions on Machine Learning Research, 2025.
Markdown
[Gee et al. "Visual-Word Tokenizer: Beyond Fixed Sets of Tokens in Vision Transformers." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/gee2025tmlr-visualword/)
BibTeX
@article{gee2025tmlr-visualword,
title = {{Visual-Word Tokenizer: Beyond Fixed Sets of Tokens in Vision Transformers}},
author = {Gee, Leonidas and Li, Wing Yan and Sharmanska, Viktoriia and Quadrianto, Novi},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/gee2025tmlr-visualword/}
}