A Spitting Image: Modular Superpixel Tokenization in Vision Transformers

Abstract

Vision Transformer (ViT) architectures traditionally employ a grid-based approach to tokenization independent of the semantic content of an image. We propose a modular superpixel tokenization strategy which decouples tokenization and feature extraction; a shift from contemporary approaches where these are treated as an undifferentiated whole. Using on-line content-aware tokenization and scale- and shape-invariant positional embeddings, we perform experiments and ablations that contrast our approach with patch-based tokenization and randomized partitions as baselines. We show that our method significantly improves the faithfulness of attributions and yields pixel-level granularity on zero-shot unsupervised dense prediction tasks, while maintaining predictive performance in classification tasks. Our approach provides a modular tokenization framework commensurable with standard architectures, extending the space of ViTs to a larger class of semantically-rich models.
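The decoupling described in the abstract can be sketched in a few lines: instead of fixed square patches, any region label map (e.g. from a superpixel algorithm) partitions the image, and each region is pooled into one token with a region-level position. This is a minimal, hypothetical illustration in NumPy, not the paper's implementation; `tokens_from_labels`, the mean-pooled features, and the centroid positions are all illustrative assumptions.

```python
import numpy as np

def tokens_from_labels(image, labels):
    """Pool an image into one token per region of an arbitrary label map.

    Hypothetical sketch: in a grid-based ViT, `labels` would encode fixed
    square patches; with superpixels it is content-aware. Each token is
    the mean feature over its region, and its position is the region
    centroid, which is agnostic to the region's scale and shape.
    """
    ids = np.unique(labels)
    feats = np.stack([image[labels == i].mean(axis=0) for i in ids])
    pos = np.stack([np.argwhere(labels == i).mean(axis=0) for i in ids])
    return feats, pos

# Toy 4x4 RGB image partitioned into two irregular regions.
img = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [0, 0, 1, 1]])
feats, pos = tokens_from_labels(img, labels)
print(feats.shape, pos.shape)  # one token and one 2-D centroid per region
```

Because the tokenizer only consumes a label map, swapping the fixed grid for a superpixel partition leaves the downstream transformer unchanged, which is the modularity the abstract refers to.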

Cite

Text

Aasan et al. "A Spitting Image: Modular Superpixel Tokenization in Vision Transformers." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-93806-1_11

Markdown

[Aasan et al. "A Spitting Image: Modular Superpixel Tokenization in Vision Transformers." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/aasan2024eccvw-spitting/) doi:10.1007/978-3-031-93806-1_11

BibTeX

@inproceedings{aasan2024eccvw-spitting,
  title     = {{A Spitting Image: Modular Superpixel Tokenization in Vision Transformers}},
  author    = {Aasan, Marius and Kolbjørnsen, Odd and Solberg, Anne H. Schistad and Rivera, Adín Ramírez},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {124--142},
  doi       = {10.1007/978-3-031-93806-1_11},
  url       = {https://mlanthology.org/eccvw/2024/aasan2024eccvw-spitting/}
}