Adapting Vision Transformers to Ultra-High Resolution Semantic Segmentation with Relay Tokens
Abstract
Current approaches for segmenting ultra-high-resolution images either slide a window, thereby discarding global context, or downsample and lose fine detail. We propose a simple yet effective method that brings explicit multi-scale reasoning to vision transformers, simultaneously preserving local details and global awareness. Concretely, we process each image in parallel at a local scale (high-resolution, small crops) and a global scale (low-resolution, large crops), and aggregate and propagate features between the two branches with a small set of learnable relay tokens. The design plugs directly into standard transformer backbones (e.g., ViT and Swin) and adds fewer than 2% extra parameters. Extensive experiments on three ultra-high-resolution segmentation benchmarks, Archaeoscape, URUR, and Gleason, and on the conventional Cityscapes dataset show consistent gains, with up to a 15% relative mIoU improvement. Code and pretrained models are available at https://archaeoscape.ai/work/relay-tokens/.
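To make the high-level description concrete, below is a minimal, hypothetical PyTorch sketch of the relay-token idea: a small set of learnable tokens summarizes one branch via cross-attention and injects that summary into the other branch. All names, shapes, and hyperparameters here (`RelayTokenBridge`, 8 relay tokens, 256-dimensional features) are illustrative assumptions, not the authors' implementation; refer to the released code at the link above for the actual method.

```python
# Illustrative sketch only (not the authors' code): a relay-token bridge that
# exchanges information between a local (high-res crop) branch and a global
# (low-res context) branch through a small set of learnable tokens.
import torch
import torch.nn as nn


class RelayTokenBridge(nn.Module):
    """Gathers features from a source branch into relay tokens, then injects
    the relay tokens into a target branch. Names and sizes are assumptions."""

    def __init__(self, dim: int = 256, num_relay: int = 8, num_heads: int = 8):
        super().__init__()
        # Learnable relay tokens, shared across the batch.
        self.relay = nn.Parameter(torch.zeros(1, num_relay, dim))
        nn.init.trunc_normal_(self.relay, std=0.02)
        # Gather step: relay tokens attend to source-branch tokens.
        self.gather = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Scatter step: target-branch tokens attend to the relay tokens.
        self.scatter = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_src = nn.LayerNorm(dim)
        self.norm_tgt = nn.LayerNorm(dim)

    def forward(self, src_tokens: torch.Tensor, tgt_tokens: torch.Tensor) -> torch.Tensor:
        """src_tokens: (B, N_src, C) source-branch features.
        tgt_tokens: (B, N_tgt, C) target-branch features.
        Returns target features enriched with source-branch context."""
        batch = src_tokens.shape[0]
        relay = self.relay.expand(batch, -1, -1)
        src = self.norm_src(src_tokens)
        # 1) Relay tokens summarize the source branch.
        relay, _ = self.gather(relay, src, src)
        # 2) Target tokens read the summary back (residual update).
        update, _ = self.scatter(self.norm_tgt(tgt_tokens), relay, relay)
        return tgt_tokens + update


if __name__ == "__main__":
    bridge = RelayTokenBridge(dim=256, num_relay=8)
    local_feats = torch.randn(2, 1024, 256)   # tokens from high-res crops
    global_feats = torch.randn(2, 256, 256)    # tokens from the low-res view
    # Propagate global context into the local branch.
    out = bridge(src_tokens=global_feats, tgt_tokens=local_feats)
    print(out.shape)  # torch.Size([2, 1024, 256])
```

In the two-branch setting described in the abstract, one would presumably apply such a bridge in both directions (local to global and global to local) between backbone stages, which keeps the added parameter count small relative to the backbone.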
Cite
Text
Perron et al. "Adapting Vision Transformers to Ultra-High Resolution Semantic Segmentation with Relay Tokens." Transactions on Machine Learning Research, 2026.
Markdown
[Perron et al. "Adapting Vision Transformers to Ultra-High Resolution Semantic Segmentation with Relay Tokens." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/perron2026tmlr-adapting/)
BibTeX
@article{perron2026tmlr-adapting,
  title = {{Adapting Vision Transformers to Ultra-High Resolution Semantic Segmentation with Relay Tokens}},
  author = {Perron, Yohann and Sydorov, Vladyslav and Pottier, Christophe and Landrieu, Loic},
  journal = {Transactions on Machine Learning Research},
  year = {2026},
  url = {https://mlanthology.org/tmlr/2026/perron2026tmlr-adapting/}
}