LBI-FL: Low-Bit Integerized Federated Learning with Temporally Dynamic Bit-Width Allocation

Abstract

Federated learning (FL) is severely challenged by the communication bottleneck and the limited computational resources of clients. Existing quantization-based FL methods cannot simultaneously reduce both uplink and downlink communication costs and mitigate the computational burden on clients. To address this problem, we propose the first low-bit integerized federated learning (LBI-FL) framework, which quantizes weights, activations, and gradients to below INT8 precision to markedly reduce communication and computational costs. Specifically, we achieve temporally dynamic bit-width allocation for weights, activations, and gradients along the training trajectory via reinforcement learning. An agent is trained to determine the bit-width allocation from a state that comprehensively captures the current bit-width, the training stage, and the quantization loss. The agent, efficiently trained on small-scale datasets, generalizes well to training varying network architectures on non-independent and identically distributed (non-IID) data. Furthermore, we theoretically demonstrate that federated learning with gradient quantization achieves a convergence rate equivalent to that of FedAvg. The proposed LBI-FL reduces communication costs by 8 times compared to full-precision FL. Extensive experiments show that LBI-FL reduces BitOPs per client by more than 50% on average with less than 2% accuracy loss compared to low-bit training at INT8 precision.
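To make the idea concrete, the sketch below shows uniform symmetric quantization at a configurable bit-width and how the resulting quantization loss (one of the state features the abstract says the RL agent observes) shrinks as the bit-width grows. All names here are hypothetical illustrations, not the authors' actual implementation.

```python
import numpy as np

def quantize(x, bits):
    # Uniform symmetric quantization to a signed integer grid of the
    # given bit-width (hypothetical helper; not the paper's exact scheme).
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for INT8, 7 for INT4
    scale = float(np.max(np.abs(x))) / qmax
    if scale == 0.0:
        scale = 1.0                       # avoid division by zero on all-zero tensors
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Quantization loss (MSE between the tensor and its dequantized copy)
# decreases as the bit-width grows -- the trade-off a bit-width
# allocation policy must balance against communication cost.
np.random.seed(0)
w = np.random.randn(256).astype(np.float32)
errs = {bits: float(np.mean((w - dequantize(*quantize(w, bits))) ** 2))
        for bits in (4, 6, 8)}
```

A policy in the spirit of LBI-FL would feed features like these per-tensor errors, together with the current bit-width and training stage, to an agent that picks the next round's bit-widths.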

Cite

Text

Ding et al. "LBI-FL: Low-Bit Integerized Federated Learning with Temporally Dynamic Bit-Width Allocation." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Ding et al. "LBI-FL: Low-Bit Integerized Federated Learning with Temporally Dynamic Bit-Width Allocation." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/ding2025icml-lbifl/)

BibTeX

@inproceedings{ding2025icml-lbifl,
  title     = {{LBI-FL: Low-Bit Integerized Federated Learning with Temporally Dynamic Bit-Width Allocation}},
  author    = {Ding, Li and Zhang, Hao and Dai, Wenrui and Li, Chenglin and Lu, Weijia and Yang, Zhifei and Zhang, Xiaodong and Ma, Xiaofeng and Zou, Junni and Xiong, Hongkai},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {13885--13899},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/ding2025icml-lbifl/}
}