LF-ViT: Reducing Spatial Redundancy in Vision Transformer for Efficient Image Recognition
Abstract
The Vision Transformer (ViT) achieves strong accuracy on high-resolution images, but it suffers from significant spatial redundancy, which inflates its computational and memory costs. To address this, we present the Localization and Focus Vision Transformer (LF-ViT), which strategically reduces computation without sacrificing performance. In the Localization phase, the model processes a reduced-resolution image; if a confident prediction cannot be made, our Neighborhood Global Class Attention (NGCA) mechanism identifies class-discriminative regions based on the initial result. In the subsequent Focus phase, the corresponding region is cropped from the original image to refine recognition. Notably, LF-ViT shares the same parameters across both phases, enabling seamless end-to-end optimization. Our experiments confirm LF-ViT's effectiveness: it reduces DeiT-S's FLOPs by 63% and doubles throughput. Code for this project is at https://github.com/edgeai1/LF-ViT.git.
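The two-phase inference described in the abstract can be sketched as a confidence-gated early-exit loop. The sketch below is an illustration only, not the authors' implementation: `backbone` stands in for the shared-parameter ViT, `localize_region` for the paper's NGCA-based region selection, and the confidence threshold is a hypothetical parameter.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def two_phase_infer(image, backbone, localize_region, threshold=0.9):
    """Sketch of LF-ViT-style inference (illustrative, not the paper's code).

    Localization phase: run the shared backbone on a down-sampled image
    and exit early if the prediction is confident. Focus phase: otherwise
    re-run the SAME backbone on a class-discriminative crop of the
    original image, as selected by `localize_region` (a stand-in for NGCA).
    Returns (predicted class, phase that produced the prediction).
    """
    # Localization phase: cheap pass over a low-resolution copy.
    low_res = image[::2, ::2]                  # naive 2x down-sampling
    probs = softmax(backbone(low_res))
    if probs.max() >= threshold:               # confident -> early exit
        return int(probs.argmax()), "localization"
    # Focus phase: crop the discriminative region from the ORIGINAL
    # image (coordinates scaled back up from the low-res grid).
    y0, y1, x0, x1 = localize_region(low_res)
    crop = image[2 * y0:2 * y1, 2 * x0:2 * x1]
    probs = softmax(backbone(crop))
    return int(probs.argmax()), "focus"
```

Because both phases call the same `backbone`, the early exit saves the full-resolution pass on easy images, which is where the reported FLOPs reduction comes from.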
Cite
Text
Hu et al. "LF-ViT: Reducing Spatial Redundancy in Vision Transformer for Efficient Image Recognition." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I3.28001
Markdown
[Hu et al. "LF-ViT: Reducing Spatial Redundancy in Vision Transformer for Efficient Image Recognition." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/hu2024aaai-lf/) doi:10.1609/AAAI.V38I3.28001
BibTeX
@inproceedings{hu2024aaai-lf,
title = {{LF-ViT: Reducing Spatial Redundancy in Vision Transformer for Efficient Image Recognition}},
author = {Hu, Youbing and Cheng, Yun and Lu, Anqi and Cao, Zhiqiang and Wei, Dawei and Liu, Jie and Li, Zhijun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {2274-2284},
doi = {10.1609/AAAI.V38I3.28001},
url = {https://mlanthology.org/aaai/2024/hu2024aaai-lf/}
}