S-STE: Continuous Pruning Function for Efficient 2:4 Sparse Pre-Training
Abstract
Training deep neural networks (DNNs) is costly. Fortunately, Nvidia Ampere and Hopper GPUs can run matrix multiplications twice as fast as their dense equivalents by exploiting 2:4 sparsity. However, previous STE-based 2:4 pre-training methods (e.g., STE with hard-thresholding, SR-STE) suffer from optimization difficulties because of the discontinuous pruning function. In this study, we comprehensively analyze the bottleneck of traditional N:M sparse training and identify three drawbacks caused by discontinuity: incorrect descent direction, inability to predict the amount of descent, and sparse mask oscillation. In light of these observations, we propose S-STE, a simple yet powerful 2:4 training method with two parts: continuously projecting weights to be 2:4 sparse, and rescaling the sparse weights with a per-tensor fixed scaling factor. In addition, we adopt minimum-variance unbiased estimation for the activation gradient and FP8 quantization for the whole process. Results show that our method surpasses previous 2:4 pre-training recipes and is comparable even with full-parameter models.
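The two ingredients named in the abstract, a 2:4 projection of the weights followed by a per-tensor rescaling, can be illustrated with a minimal PyTorch sketch. The hard magnitude-based projection and the norm-matching scale below are assumptions chosen for illustration only; the paper's actual contribution is a continuous (soft) relaxation of this projection and a fixed scaling factor, whose exact formulas are not given in the abstract. The helper names `prune_2to4` and `rescale` are hypothetical.

```python
# Illustrative sketch, not the paper's exact formulation.
import torch

def prune_2to4(weight: torch.Tensor) -> torch.Tensor:
    """Keep the 2 largest-magnitude entries in every contiguous group of 4 weights."""
    w = weight.reshape(-1, 4)                         # group weights into blocks of 4
    idx = w.abs().topk(2, dim=-1).indices             # positions of the 2 largest |w| per block
    mask = torch.zeros_like(w).scatter_(-1, idx, 1.0) # binary 2:4 mask
    return (w * mask).reshape(weight.shape)

def rescale(sparse: torch.Tensor, dense: torch.Tensor) -> torch.Tensor:
    """Per-tensor scalar that restores the dense tensor's L2 norm (an assumed, illustrative choice)."""
    beta = dense.norm() / sparse.norm().clamp_min(1e-12)
    return beta * sparse

w = torch.randn(8, 16)                                # row length divisible by 4
w_sparse = rescale(prune_2to4(w), w)
```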
Cite
Text
Hu et al. "S-STE: Continuous Pruning Function for Efficient 2:4 Sparse Pre-Training." Neural Information Processing Systems, 2024. doi:10.52202/079017-1063

Markdown

[Hu et al. "S-STE: Continuous Pruning Function for Efficient 2:4 Sparse Pre-Training." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/hu2024neurips-sste/) doi:10.52202/079017-1063

BibTeX
@inproceedings{hu2024neurips-sste,
title = {{S-STE: Continuous Pruning Function for Efficient 2:4 Sparse Pre-Training}},
author = {Hu, Yuezhou and Zhu, Jun and Chen, Jianfei},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-1063},
url = {https://mlanthology.org/neurips/2024/hu2024neurips-sste/}
}