Movement Pruning: Adaptive Sparsity by Fine-Tuning

Abstract

Magnitude pruning is a widely used strategy for reducing model size in pure supervised learning; however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. We propose the use of movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning. We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. Experiments show that when pruning large pretrained language models, movement pruning shows significant improvements in high-sparsity regimes. When combined with distillation, the approach achieves minimal accuracy loss with down to only 3% of the model parameters.
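To make the first-order idea in the abstract concrete, here is a minimal sketch of movement-style importance scoring: each weight accumulates the score S = -Σ (∂L/∂W) · W over fine-tuning steps, so weights whose updates move them away from zero are retained. This assumes PyTorch; the toy layer, training loop, and 10% sparsity level are hypothetical illustration, not the paper's actual setup, which learns the scores jointly with the weights via a straight-through estimator.

```python
import torch

torch.manual_seed(0)
layer = torch.nn.Linear(16, 4)
scores = torch.zeros_like(layer.weight)                 # accumulated movement scores
optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)

for _ in range(100):                                    # toy fine-tuning loop
    x = torch.randn(32, 16)
    target = torch.randn(32, 4)
    loss = torch.nn.functional.mse_loss(layer(x), target)
    optimizer.zero_grad()
    loss.backward()
    # accumulate -grad * weight before the update: the first-order movement signal
    scores += -layer.weight.grad * layer.weight.detach()
    optimizer.step()

# keep the top 10% of weights by movement score (hypothetical sparsity level)
k = int(0.10 * scores.numel())
threshold = scores.flatten().topk(k).values.min()
mask = (scores >= threshold).float()
layer.weight.data *= mask
print(f"remaining weights: {int(mask.sum())} / {mask.numel()}")
```

Contrast this with magnitude pruning, which would instead rank weights by |W| at the end of fine-tuning; the movement score depends on how weights change during fine-tuning, which is what makes the method adaptive to the transfer-learning regime.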

Cite

Text

Sanh et al. "Movement Pruning: Adaptive Sparsity by Fine-Tuning." Neural Information Processing Systems, 2020.

Markdown

[Sanh et al. "Movement Pruning: Adaptive Sparsity by Fine-Tuning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/sanh2020neurips-movement/)

BibTeX

@inproceedings{sanh2020neurips-movement,
  title     = {{Movement Pruning: Adaptive Sparsity by Fine-Tuning}},
  author    = {Sanh, Victor and Wolf, Thomas and Rush, Alexander},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/sanh2020neurips-movement/}
}