Reducing Communication for Split Learning by Randomized Top-K Sparsification

Abstract

Split learning is a simple solution for Vertical Federated Learning (VFL), which has drawn substantial attention in both research and applications due to its simplicity and efficiency. However, communication efficiency is still a crucial issue for split learning. In this paper, we investigate multiple communication-reduction methods for split learning, including cut layer size reduction, top-k sparsification, quantization, and L1 regularization. Through analysis of cut layer size reduction and top-k sparsification, we further propose randomized top-k sparsification to make the model generalize and converge better. This is done by selecting top-k elements with a large probability while also having a small probability to select non-top-k elements. Empirical results show that, compared with other communication-reduction methods, our proposed randomized top-k sparsification achieves better model performance under the same compression level.
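For intuition, below is a minimal NumPy sketch of the idea described in the abstract: the top-k entries of the cut-layer activation are kept with high probability, while non-top-k entries have a small chance of being selected instead. The function name, the epsilon parameter, and the uniform swap rule are illustrative assumptions; the paper's exact selection probabilities may differ.

import numpy as np

def randomized_topk(x, k, epsilon=0.1, rng=None):
    """Illustrative randomized top-k sparsification (not the authors' exact scheme).

    Keeps a top-k entry of |x| with probability 1 - epsilon; with probability
    epsilon, a uniformly chosen non-top-k entry is kept in its place, so that
    non-top-k activations occasionally survive and get gradient signal.
    """
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(np.abs(x))[::-1]      # indices sorted by magnitude, descending
    topk, rest = list(order[:k]), list(order[k:])

    keep = []
    for idx in topk:
        if rest and rng.random() < epsilon:
            # small probability: swap this slot for a random non-top-k index
            keep.append(rest.pop(rng.integers(len(rest))))
        else:
            keep.append(idx)

    mask = np.zeros_like(x, dtype=bool)
    mask[np.array(keep)] = True
    return x * mask                          # sparsified activation to transmit

Only the k selected values (and their indices) need to be sent across the cut layer, which is where the communication saving comes from.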

Cite

Text

Zheng et al. "Reducing Communication for Split Learning by Randomized Top-K Sparsification." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/519

Markdown

[Zheng et al. "Reducing Communication for Split Learning by Randomized Top-K Sparsification." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/zheng2023ijcai-reducing/) doi:10.24963/IJCAI.2023/519

BibTeX

@inproceedings{zheng2023ijcai-reducing,
  title     = {{Reducing Communication for Split Learning by Randomized Top-K Sparsification}},
  author    = {Zheng, Fei and Chen, Chaochao and Lyu, Lingjuan and Yao, Binhui},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {4665--4673},
  doi       = {10.24963/IJCAI.2023/519},
  url       = {https://mlanthology.org/ijcai/2023/zheng2023ijcai-reducing/}
}