Data Level Lottery Ticket Hypothesis for Vision Transformers
Abstract
The conventional lottery ticket hypothesis (LTH) claims that there exists a sparse subnetwork within a dense neural network, together with a proper random initialization, called the winning ticket, that can be trained from scratch to perform almost as well as its dense counterpart. Meanwhile, the LTH has scarcely been evaluated for vision transformers (ViTs). In this paper, we first show that a conventional winning ticket is hard to find at the weight level of ViTs with existing methods. Then, inspired by the input dependence of ViTs, we generalize the LTH for ViTs to the input data, which consists of image patches. That is, there exists a subset of input image patches such that a ViT can be trained from scratch using only this subset of patches and achieve accuracy similar to that of a ViT trained on all image patches. We call this subset of input patches the winning ticket, which carries a significant amount of the information in the input data. We use a ticket selector to generate the winning tickets based on the informativeness of patches for various types of ViTs, including DeiT, LV-ViT, and Swin Transformers. The experiments show a clear performance gap between models trained with winning tickets and those trained with randomly selected subsets, which supports our proposed hypothesis. We further elaborate the analogy between our proposed Data-LTH-ViTs and the conventional LTH to verify the integrity of our theory. The source code is available at https://github.com/shawnricecake/vit-lottery-ticket-input.
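The patch-selection idea can be illustrated with a minimal sketch. Note the scoring rule here (L2 norm of each patch embedding) is only a stand-in informativeness proxy for illustration; the paper's actual ticket selector is a learned component, and the function name and parameters below are hypothetical.

```python
import numpy as np

def select_winning_patches(patch_embeddings, keep_ratio=0.5):
    """Keep the top fraction of patches ranked by a simple informativeness
    proxy (embedding L2 norm -- an assumption, not the paper's selector).

    patch_embeddings: array of shape (num_patches, embed_dim)
    Returns the sorted indices of the retained "winning ticket" patches.
    """
    scores = np.linalg.norm(patch_embeddings, axis=1)  # one score per patch
    k = max(1, int(round(len(scores) * keep_ratio)))   # number of patches to keep
    winning = np.argsort(scores)[::-1][:k]             # highest-scoring patches
    return np.sort(winning)                            # restore spatial order

# Toy usage: 8 patches with 4-dimensional embeddings; the returned indices
# would then be used to train the ViT on only that subset of patches.
rng = np.random.default_rng(0)
patches = rng.normal(size=(8, 4))
ticket = select_winning_patches(patches, keep_ratio=0.5)
print(ticket)
```

Training then proceeds as usual, but the token sequence fed to the transformer contains only the selected patches (plus the class token), which is what makes the data-level ticket analogous to a weight-level sparse subnetwork.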
Cite
Text
Shen et al. "Data Level Lottery Ticket Hypothesis for Vision Transformers." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/153

Markdown
[Shen et al. "Data Level Lottery Ticket Hypothesis for Vision Transformers." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/shen2023ijcai-data/) doi:10.24963/IJCAI.2023/153

BibTeX
@inproceedings{shen2023ijcai-data,
title = {{Data Level Lottery Ticket Hypothesis for Vision Transformers}},
author = {Shen, Xuan and Kong, Zhenglun and Qin, Minghai and Dong, Peiyan and Yuan, Geng and Meng, Xin and Tang, Hao and Ma, Xiaolong and Wang, Yanzhi},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2023},
pages = {1378-1386},
doi = {10.24963/IJCAI.2023/153},
url = {https://mlanthology.org/ijcai/2023/shen2023ijcai-data/}
}