Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training

Abstract

Vision transformers (ViTs) have recently achieved success in many applications, but their intensive computation and heavy memory usage at both training and inference time limit their generalization. Previous compression algorithms usually start from pre-trained dense models and focus only on efficient inference, so time-consuming training remains unavoidable. In contrast, this paper points out that the million-scale training data are redundant, which is the fundamental cause of the tedious training. To address this issue, we introduce sparsity into the data and propose an end-to-end efficient training framework built on three sparse perspectives, dubbed Tri-Level E-ViT. Specifically, we leverage a hierarchical data-redundancy reduction scheme that explores sparsity at three levels: the number of training examples in the dataset, the number of patches (tokens) in each example, and the number of connections between tokens encoded in the attention weights. With extensive experiments, we demonstrate that the proposed technique noticeably accelerates training for various ViT architectures while maintaining accuracy. Remarkably, under certain ratios we can improve ViT accuracy rather than compromising it. For example, we achieve a 15.2% speedup with 72.6% (+0.4) Top-1 accuracy on DeiT-T, and a 15.7% speedup with 79.9% (+0.1) Top-1 accuracy on DeiT-S. This demonstrates the existence of data redundancy in ViT training. Our code is released at https://github.com/ZLKong/Tri-Level-ViT
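To make the second sparsity level concrete, here is a minimal NumPy sketch of attention-guided token (patch) pruning: patches are scored by the attention they receive from the [CLS] token, and only the top fraction is kept. This is a common proxy for token importance and only an illustrative assumption; the function name `prune_tokens`, the CLS-attention scoring, and the keep ratio are not taken from the paper, whose exact scheme is described in the full text.

```python
import numpy as np

def prune_tokens(tokens, attn, keep_ratio=0.7):
    """Keep the most informative patch tokens.

    tokens: (seq_len, dim) array, row 0 is the [CLS] token.
    attn:   (num_heads, seq_len, seq_len) row-normalized attention.
    Scoring by CLS-row attention is an illustrative assumption, not
    necessarily the paper's exact criterion.
    """
    # Average the attention the CLS token pays to each patch over heads,
    # skipping the CLS-to-CLS entry itself.
    cls_attn = attn[:, 0, 1:].mean(axis=0)           # (seq_len - 1,)
    num_keep = max(1, int(keep_ratio * cls_attn.size))
    keep = np.argsort(cls_attn)[::-1][:num_keep]     # top-scoring patches
    keep = np.sort(keep) + 1                         # restore order; +1 offsets CLS
    # Always retain the CLS token at position 0.
    return tokens[np.concatenate(([0], keep))]

# Toy example: 1 head, 5 tokens (CLS + 4 patches), embedding dim 3.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 3))
attn = rng.random((1, 5, 5))
attn /= attn.sum(axis=-1, keepdims=True)             # row-normalize
pruned = prune_tokens(tokens, attn, keep_ratio=0.5)
print(pruned.shape)  # (3, 3): CLS + 2 kept patches
```

The same top-k idea extends to the other two levels, selecting a subset of training examples per epoch and sparsifying attention connections, which together form the hierarchical "peeling" the title refers to.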

Cite

Text

Kong et al. "Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I7.26008

Markdown

[Kong et al. "Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/kong2023aaai-peeling/) doi:10.1609/AAAI.V37I7.26008

BibTeX

@inproceedings{kong2023aaai-peeling,
  title     = {{Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training}},
  author    = {Kong, Zhenglun and Ma, Haoyu and Yuan, Geng and Sun, Mengshu and Xie, Yanyue and Dong, Peiyan and Meng, Xin and Shen, Xuan and Tang, Hao and Qin, Minghai and Chen, Tianlong and Ma, Xiaolong and Xie, Xiaohui and Wang, Zhangyang and Wang, Yanzhi},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {8360--8368},
  doi       = {10.1609/AAAI.V37I7.26008},
  url       = {https://mlanthology.org/aaai/2023/kong2023aaai-peeling/}
}