SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning
Abstract
Recently, Vision Transformer (ViT) has continuously established new milestones in the computer vision field, while its high computation and memory cost make deployment in industrial production difficult. Considering the computation complexity, the internal data pattern of ViTs, and edge-device deployment, we propose a latency-aware soft token pruning framework, SPViT, which can be set up on vanilla Transformers of both flat and hierarchical structures, such as DeiTs and Swin-Transformers (Swin). More concretely, we design a dynamic attention-based multi-head token selector, a lightweight module for adaptive instance-wise token selection. We further introduce a soft pruning technique that integrates the less informative tokens chosen by the selector module into a package token rather than discarding them completely. Through our proposed latency-aware training strategy, SPViT is bound to the trade-off between accuracy and the latency requirements of specific edge devices. Experimental results show that SPViT significantly reduces the computation cost of ViTs with comparable performance on image classification. Moreover, SPViT can guarantee that the identified model meets the latency specifications of mobile devices and FPGAs, and even achieves real-time execution of DeiT-T on mobile devices. For example, SPViT reduces the latency of DeiT-T to 26 ms (26%–41% better than existing works) on a mobile device with 0.25%–4% higher top-1 accuracy on ImageNet. Our code is released at https://github.com/PeiyanFlying/SPViT
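To make the two core ideas concrete, below is a minimal PyTorch-style sketch of per-token importance scoring followed by soft pruning with a package token. This is not the authors' released code (see the repository linked above): the names `TokenSelector`, `soft_prune`, and `keep_ratio` are hypothetical, and the selector is simplified to a small MLP, whereas the paper's selector is attention-based and multi-head.

```python
import torch
import torch.nn as nn


class TokenSelector(nn.Module):
    """Lightweight head that scores each token's importance.

    Simplified stand-in for the paper's attention-based multi-head
    selector: a small MLP over token embeddings that outputs a
    per-token keep probability.
    """

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, 2),  # logits for (prune, keep)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) patch tokens -> keep probability per token, (B, N)
        return self.mlp(x).softmax(dim=-1)[..., 1]


def soft_prune(x: torch.Tensor, keep_prob: torch.Tensor,
               keep_ratio: float = 0.7) -> torch.Tensor:
    """Soft token pruning: keep the top-k scored tokens and fuse the
    remainder into a single package token (a probability-weighted
    average) instead of discarding them completely."""
    B, N, D = x.shape
    k = max(1, int(N * keep_ratio))

    # Mark the k most informative tokens per image.
    idx = keep_prob.topk(k, dim=1).indices                      # (B, k)
    keep_mask = torch.zeros(B, N, dtype=torch.bool, device=x.device)
    keep_mask.scatter_(1, idx, True)

    kept = x[keep_mask].view(B, k, D)                           # informative tokens

    # Weighted average of the pruned tokens -> one package token.
    w = keep_prob.masked_fill(keep_mask, 0.0)                   # (B, N)
    package = (w.unsqueeze(-1) * x).sum(dim=1) / w.sum(dim=1, keepdim=True).clamp_min(1e-6)

    return torch.cat([kept, package.unsqueeze(1)], dim=1)       # (B, k + 1, D)


# Usage sketch with DeiT-S-like shapes: 196 patch tokens of width 384.
x = torch.randn(2, 196, 384)
selector = TokenSelector(384)
out = soft_prune(x, selector(x))  # (2, 138, 384): 137 kept + 1 package token
```

Keeping the package token preserves a summary of the discarded information at the cost of a single extra token per layer, which is what distinguishes soft pruning from hard token dropping.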
Cite
Text
Kong et al. "SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20083-0_37
Markdown
[Kong et al. "SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/kong2022eccv-spvit/) doi:10.1007/978-3-031-20083-0_37
BibTeX
@inproceedings{kong2022eccv-spvit,
title = {{SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning}},
author = {Kong, Zhenglun and Dong, Peiyan and Ma, Xiaolong and Meng, Xin and Niu, Wei and Sun, Mengshu and Shen, Xuan and Yuan, Geng and Ren, Bin and Tang, Hao and Qin, Minghai and Wang, Yanzhi},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-20083-0_37},
url = {https://mlanthology.org/eccv/2022/kong2022eccv-spvit/}
}