Vote for Nearest Neighbors Meta-Pruning of Self-Supervised Networks
Abstract
Pruning plays an essential role in deploying deep neural nets (DNNs) on hardware with limited memory or computation. However, current high-quality iterative pruning incurs a substantial carbon footprint when compressing a large DNN for a wide variety of devices and tasks. Can we reuse the pruning results from previous tasks to accelerate pruning for a new task? Can we find a better initialization for a new task? We study this "nearest neighbors meta-pruning" problem by first investigating different choices of pre-trained models for pruning under limited iterations. Our empirical study reveals several advantages of self-supervised pre-trained models when pruned for multiple tasks. We further study the overlap of pruned models for similar tasks and how this overlap changes across layers. Inspired by these discoveries, we develop a simple but strong baseline, "Meta-Vote Pruning (MVP)", that significantly reduces the pruning iterations for a new task by initializing a sub-network from the pruned models of its most similar tasks. In experiments, we demonstrate the advantages of MVP through extensive empirical studies and comparisons with popular pruning methods.
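To make the voting-based initialization described above concrete, below is a minimal sketch, assuming each pruned neighbor model is stored as a dictionary of binary keep-masks; the function name meta_vote_init_mask, the per-tensor top-k selection, and the tie-breaking are illustrative assumptions, not the authors' exact procedure, and the resulting mask would still be refined by a few pruning/fine-tuning iterations on the new task.

import torch

def meta_vote_init_mask(neighbor_masks, keep_ratio=0.1):
    """Build an initial keep-mask for a new task by voting over the
    binary pruning masks of its nearest-neighbor tasks (hypothetical sketch).

    neighbor_masks: list of dicts {param_name: 0/1 tensor}, one per neighbor task.
    keep_ratio: fraction of weights to keep in each parameter tensor.
    """
    init_mask = {}
    for name in neighbor_masks[0]:
        # For each weight, count how many neighbor sub-networks kept it.
        votes = torch.stack([m[name].float() for m in neighbor_masks]).sum(dim=0)
        k = max(1, int(keep_ratio * votes.numel()))
        # Keep the k most-voted weights in this tensor (ties broken by topk order).
        flat = votes.flatten()
        keep_idx = flat.topk(k).indices
        mask = torch.zeros_like(flat)
        mask[keep_idx] = 1.0
        init_mask[name] = mask.view_as(votes)
    return init_mask

Under these assumptions, the returned masks would be applied to a (e.g., self-supervised) pre-trained backbone to obtain the initial sub-network for the new task before any further pruning iterations.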
Cite

Text
Zhao et al. "Vote for Nearest Neighbors Meta-Pruning of Self-Supervised Networks." ICML 2022 Workshops: Pre-Training, 2022.

Markdown
[Zhao et al. "Vote for Nearest Neighbors Meta-Pruning of Self-Supervised Networks." ICML 2022 Workshops: Pre-Training, 2022.](https://mlanthology.org/icmlw/2022/zhao2022icmlw-vote/)

BibTeX
@inproceedings{zhao2022icmlw-vote,
title = {{Vote for Nearest Neighbors Meta-Pruning of Self-Supervised Networks}},
author = {Zhao, Haiyan and Zhou, Tianyi and Long, Guodong and Jiang, Jing and Zhang, Chengqi},
booktitle = {ICML 2022 Workshops: Pre-Training},
year = {2022},
url = {https://mlanthology.org/icmlw/2022/zhao2022icmlw-vote/}
}