MLP Can Be a Good Transformer Learner

Abstract

The self-attention mechanism is the key component of the Transformer but is often criticized for its computational demands. Previous token pruning works motivate their methods from the perspective of computational redundancy, but they still need to load the full network and incur the same memory cost. This paper introduces a novel strategy that simplifies vision transformers and reduces computational load through the selective removal of non-essential attention layers, guided by entropy considerations. We identify that, for the attention layers in the bottom blocks, their subsequent MLP layers, i.e., two feed-forward layers, can elicit the same entropy quantity. Meanwhile, these accompanying MLPs are under-exploited, since they exhibit smaller feature entropy than the MLPs in the top blocks. Therefore, we propose to integrate the uninformative attention layers into their subsequent counterparts by degenerating them into identity mappings, yielding only the MLP in certain transformer blocks. Experimental results on ImageNet-1k show that the proposed method can remove 40% of the attention layers of DeiT-B, improving throughput and memory bound without compromising performance.
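
For illustration, the following is a minimal PyTorch sketch of the idea summarized above: estimate a per-block entropy score for the attention output, then degenerate the lowest-entropy attention layers into identity mappings so that those blocks reduce to their MLP. The names Block, feature_entropy, and drop_low_entropy_attention, as well as the histogram-based entropy estimate, are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumptions noted above), not the paper's code.
import torch
import torch.nn as nn


class Block(nn.Module):
    """Simplified pre-norm ViT block: x + Attn(LN(x)), then x + MLP(LN(x))."""
    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim))

    def forward(self, x):
        if isinstance(self.attn, nn.Identity):
            # Degenerated block: the attention branch is removed,
            # only the MLP sub-layer remains.
            return x + self.mlp(self.norm2(x))
        n = self.norm1(x)
        a, _ = self.attn(n, n, n)
        x = x + a
        return x + self.mlp(self.norm2(x))


def feature_entropy(feat, bins=64):
    """Rough entropy estimate from a histogram of feature values."""
    p = torch.histc(feat.float(), bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * p.log()).sum().item()


def drop_low_entropy_attention(blocks, x, ratio=0.4):
    """Replace attention with identity in the `ratio` lowest-entropy blocks."""
    scores = []
    with torch.no_grad():
        h = x
        for blk in blocks:
            n = blk.norm1(h)
            a, _ = blk.attn(n, n, n)
            scores.append(feature_entropy(a))
            h = blk(h)
    k = int(len(blocks) * ratio)
    for i in sorted(range(len(blocks)), key=lambda i: scores[i])[:k]:
        blocks[i].attn = nn.Identity()


blocks = nn.ModuleList([Block() for _ in range(12)])
tokens = torch.randn(2, 197, 768)  # (batch, tokens, dim), DeiT-B-like shapes
drop_low_entropy_attention(blocks, tokens, ratio=0.4)  # 40% of attention removed

After the replacement, the selected blocks no longer load or compute attention weights, which is where the throughput and memory improvements reported in the abstract come from.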

Cite

Text

Lin et al. "MLP Can Be a Good Transformer Learner." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01843

Markdown

[Lin et al. "MLP Can Be a Good Transformer Learner." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/lin2024cvpr-mlp/) doi:10.1109/CVPR52733.2024.01843

BibTeX

@inproceedings{lin2024cvpr-mlp,
  title     = {{MLP Can Be a Good Transformer Learner}},
  author    = {Lin, Sihao and Lyu, Pumeng and Liu, Dongrui and Tang, Tao and Liang, Xiaodan and Song, Andy and Chang, Xiaojun},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {19489--19498},
  doi       = {10.1109/CVPR52733.2024.01843},
  url       = {https://mlanthology.org/cvpr/2024/lin2024cvpr-mlp/}
}