Training-Free Acceleration of ViTs with Delayed Spatial Merging
Abstract
Token merging has emerged as a new paradigm that can accelerate the inference of Vision Transformers (ViTs) without any retraining or fine-tuning. To push the frontier of training-free acceleration in ViTs, we improve token merging by adding the perspectives of 1) activation outliers and 2) hierarchical representations. Through a careful analysis of the attention behavior in ViTs, we characterize a delayed onset of the convergent attention phenomenon, which makes token merging undesirable in the bottom blocks of ViTs. Moreover, we augment token merging with a hierarchical processing scheme to capture multi-scale redundancy between visual tokens. Combining these two insights, we build a unified inference framework called DSM: Delayed Spatial Merging. We extensively evaluate DSM on various ViT model scales (Tiny to Huge) and tasks (ImageNet-1k and transfer learning), achieving up to 1.8$\times$ FLOP reduction and 1.6$\times$ throughput speedup with negligible accuracy loss, while being two orders of magnitude faster than existing methods.
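To make the idea concrete, below is a minimal sketch of similarity-based token merging with a delayed onset, in the spirit of the abstract. This is an illustrative toy implementation, not the paper's actual DSM algorithm: the bipartite even/odd matching, the averaging rule, and the `merge_most_similar`, `delayed_merging`, `r`, and `delay` names are all assumptions made for the example.

```python
import numpy as np

def merge_most_similar(tokens, r):
    """Illustrative sketch: merge the r most similar (even, odd) token pairs
    by cosine similarity. Not the paper's exact merging rule."""
    # Normalize so dot products give cosine similarity.
    x = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    a, b = x[0::2], x[1::2]                # bipartite split of the token set
    sim = a @ b.T                          # similarity of each even token to each odd token
    best = sim.argmax(axis=1)              # best odd-side match per even token
    scores = sim[np.arange(len(a)), best]
    keep = np.ones(len(a), dtype=bool)
    merged = tokens[1::2].copy()
    # Merge the r highest-scoring pairs: average each even token into its match.
    for i in np.argsort(scores)[::-1][:r]:
        merged[best[i]] = (tokens[0::2][i] + merged[best[i]]) / 2
        keep[i] = False
    return np.concatenate([tokens[0::2][keep], merged])

def delayed_merging(blocks, tokens, r, delay):
    """Apply merging only from block index `delay` onward, reflecting the
    delayed onset of convergent attention: bottom blocks are left unmerged.
    `blocks` is a list of callables standing in for ViT blocks."""
    for d, block in enumerate(blocks):
        tokens = block(tokens)
        if d >= delay:                     # skip merging in the bottom blocks
            tokens = merge_most_similar(tokens, r)
    return tokens
```

Each merge step removes `r` tokens, so the sequence shrinks as depth increases, which is where the FLOP reduction comes from; delaying the onset preserves the bottom blocks where, per the abstract, attention has not yet converged.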
Cite
Heo et al. "Training-Free Acceleration of ViTs with Delayed Spatial Merging." ICML 2024 Workshops: ES-FoMo-II, 2024.
BibTeX
@inproceedings{heo2024icmlw-trainingfree,
title = {{Training-Free Acceleration of ViTs with Delayed Spatial Merging}},
author = {Heo, Jung Hwan and Azizi, Seyedarmin and Fayyazi, Arash and Pedram, Massoud},
booktitle = {ICML 2024 Workshops: ES-FoMo-II},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/heo2024icmlw-trainingfree/}
}