ALTo: Adaptive-Length Tokenizer for Autoregressive Mask Generation
Abstract
While humans effortlessly draw visual objects and shapes by adaptively allocating attention based on their complexity, existing multimodal large language models (MLLMs) remain constrained by rigid token representations. To bridge this gap, we propose ALTo, an adaptive-length tokenizer for autoregressive mask generation. At its core are a novel token length predictor, a length regularization term, and a differentiable token chunking strategy. We further build ALToLLM, which seamlessly integrates ALTo into an MLLM. Preferences on the trade-off between mask quality and efficiency are implemented with group relative policy optimization (GRPO). Experiments demonstrate that ALToLLM achieves state-of-the-art performance with adaptive token cost on popular segmentation benchmarks. Code and models will be released.
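The abstract names three ingredients: a token length predictor, a length regularization term, and a differentiable token chunking strategy. The following is a minimal sketch of how these pieces could fit together, assuming a sigmoid-ramp soft mask as the differentiable chunking mechanism; the class name, shapes, and the specific formulation are illustrative assumptions and may differ from the paper's actual design.

```python
import torch
import torch.nn as nn

class AdaptiveLengthHead(nn.Module):
    """Hypothetical sketch: length predictor + differentiable soft chunking.

    All names and the sigmoid-ramp formulation are assumptions made for
    illustration, not the paper's confirmed implementation.
    """

    def __init__(self, dim: int, max_tokens: int = 256, tau: float = 1.0):
        super().__init__()
        self.max_tokens = max_tokens
        self.tau = tau  # temperature of the soft chunking ramp
        # Predict a scalar length in [0, max_tokens] from a pooled feature.
        self.length_predictor = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, 1), nn.Sigmoid()
        )

    def forward(self, tokens: torch.Tensor):
        # tokens: (B, T, D) candidate mask tokens, with T == self.max_tokens.
        B, T, D = tokens.shape
        pooled = tokens.mean(dim=1)                          # (B, D)
        length = self.length_predictor(pooled) * T           # (B, 1) in [0, T]
        pos = torch.arange(T, device=tokens.device).float()  # (T,)
        # Soft keep-mask: ~1 for positions below the predicted length,
        # ~0 beyond it; differentiable in `length`, so gradients can
        # shrink or grow the token budget end to end.
        keep = torch.sigmoid((length - pos) / self.tau)      # (B, T)
        chunked = tokens * keep.unsqueeze(-1)                # (B, T, D)
        # Length regularization: penalize the expected token count, to be
        # weighted against the reconstruction loss.
        length_reg = keep.sum(dim=1).mean()
        return chunked, length_reg
```

Under these assumptions, the sigmoid ramp keeps the effective length differentiable, so a weighted `length_reg` term can trade token count against mask quality during training; a preference-based stage such as GRPO could then tune where on that trade-off the model operates.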
Cite
Text
Wang et al. "ALTo: Adaptive-Length Tokenizer for Autoregressive Mask Generation." Advances in Neural Information Processing Systems, 2025.
Markdown
[Wang et al. "ALTo: Adaptive-Length Tokenizer for Autoregressive Mask Generation." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/wang2025neurips-alto/)
BibTeX
@inproceedings{wang2025neurips-alto,
title = {{ALTo: Adaptive-Length Tokenizer for Autoregressive Mask Generation}},
author = {Wang, Lingfeng and Lin, Hualing and Chen, Senda and Wang, Tao and Cheng, Changxu and Zhong, Yangyang and Zheng, Dong and Zhao, Wuyue},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/wang2025neurips-alto/}
}