Fine-Tuning Language Models over Slow Networks Using Activation Quantization with Guarantees
Abstract
Communication compression is a crucial technique for modern distributed learning systems to alleviate their communication bottlenecks over slower networks. Despite recent intensive studies of gradient compression for data parallel-style training, compressing the activations for models trained with pipeline parallelism is still an open problem. In this paper, we propose AQ-SGD, a novel activation compression algorithm for communication-efficient pipeline-parallel training over slow networks. Unlike previous efforts in activation compression, AQ-SGD compresses the changes of the activations rather than the activation values themselves. This allows us to show, to the best of our knowledge for the first time, that one can still achieve an $O(1/\sqrt{T})$ convergence rate for non-convex objectives under activation compression, without making assumptions on gradient unbiasedness that do not hold for deep learning models with non-linear activation functions. We then show that AQ-SGD can be optimized and implemented efficiently, without additional end-to-end runtime overhead. We evaluate AQ-SGD by fine-tuning language models with up to 1.5 billion parameters, compressing activations to 2-4 bits. AQ-SGD provides up to $4.3\times$ end-to-end speed-up over slower networks, without sacrificing model quality. Moreover, we show that AQ-SGD can be combined with state-of-the-art gradient compression algorithms to enable end-to-end communication compression: all communications between machines, including model gradients, forward activations, and backward gradients, are compressed into lower precision. This provides up to $4.9\times$ end-to-end speed-up, without sacrificing model quality.
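The core idea stated in the abstract, compressing the change of an activation rather than the activation value itself, can be sketched roughly as below. This is a minimal illustration under assumed details (a simple uniform quantizer and a per-sample buffer keyed by `sample_id`); the names `quantize` and `ActivationDeltaCompressor` are illustrative and not from the paper.

```python
import numpy as np

def quantize(x, num_bits=4):
    # Uniform symmetric quantizer used as a stand-in for the paper's quantizer;
    # the actual quantization scheme is not specified in the abstract.
    scale = np.max(np.abs(x)) + 1e-12
    levels = 2 ** (num_bits - 1) - 1
    return np.round(x / scale * levels) * scale / levels

class ActivationDeltaCompressor:
    """Sender-side sketch: quantize the *change* of an activation relative to a
    locally mirrored copy of what the receiver already holds, then update that
    copy with the quantized delta so both ends stay in sync."""

    def __init__(self, num_bits=4):
        self.num_bits = num_bits
        self.buffer = {}  # sample_id -> activation as reconstructed on the receiver

    def compress(self, sample_id, activation):
        if sample_id not in self.buffer:
            # First visit of this sample: send the quantized activation itself.
            msg = quantize(activation, self.num_bits)
        else:
            # Later visits: send only the quantized change since the last message.
            msg = quantize(activation - self.buffer[sample_id], self.num_bits)
        # Mirror the receiver's state: it adds every message it gets to its copy.
        self.buffer[sample_id] = self.buffer.get(sample_id, 0.0) + msg
        return msg
```

On the receiving pipeline stage, the same accumulation (`recon[sample_id] = recon.get(sample_id, 0.0) + msg`) recovers the activation estimate, so only the low-bit message crosses the network. The intuition is that as training converges the per-sample activation changes shrink, so quantizing the change loses less information than quantizing the value directly.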
Cite
Text

Wang et al. "Fine-Tuning Language Models over Slow Networks Using Activation Quantization with Guarantees." Neural Information Processing Systems, 2022.

Markdown

[Wang et al. "Fine-Tuning Language Models over Slow Networks Using Activation Quantization with Guarantees." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/wang2022neurips-finetuning/)

BibTeX
@inproceedings{wang2022neurips-finetuning,
title = {{Fine-Tuning Language Models over Slow Networks Using Activation Quantization with Guarantees}},
author = {Wang, Jue and Yuan, Binhang and Rimanic, Luka and He, Yongjun and Dao, Tri and Chen, Beidi and Ré, Christopher and Zhang, Ce},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/wang2022neurips-finetuning/}
}