A High-Efficiency Federated Learning Method Using Complementary Pruning for D2D Communication (Student Abstract)

Abstract

In federated learning, frequent parameter exchange between clients and the server incurs substantial communication overhead, much of it caused by redundancy within the transmitted parameters. To address this issue, we propose FedCPD, a federated learning method that applies Complementary Pruning for Device-to-Device (D2D) Communication. The approach reduces the volume of transmitted parameters by applying complementary pruning on both the server and the clients, and it lowers the communication frequency between clients and the server by propagating updates along a chain of clients (i.e., device-to-device communication). Experiments on the MNIST, FMNIST, CIFAR-10, and CIFAR-100 datasets demonstrate that our method significantly reduces communication costs while improving model accuracy.
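
The abstract only outlines the mechanism, so the Python/NumPy sketch below illustrates one plausible reading of it: the server broadcasts a magnitude-pruned model, a chain of clients trains it device-to-device, and the final client uploads only the complementary (server-pruned) weights. The mask rule, keep ratio, chain length, and all function names here are assumptions for illustration, not the paper's actual algorithm.

# A minimal sketch, assuming magnitude-based masks and a fixed keep ratio;
# the abstract does not specify FedCPD's pruning rule or chain order, so
# every name and hyperparameter below is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(0)

def magnitude_mask(w, keep_ratio=0.5):
    # Keep the largest-magnitude fraction of weights (True = kept).
    k = max(1, int(keep_ratio * w.size))
    threshold = np.partition(np.abs(w).ravel(), -k)[-k]
    return np.abs(w) >= threshold

def local_train(w, lr=0.1):
    # Placeholder for a real local SGD step on client data.
    return w - lr * rng.normal(size=w.shape)

# Server side: prune by magnitude and broadcast only the kept weights.
w_global = rng.normal(size=(8, 8))
server_mask = magnitude_mask(w_global)
w_sparse = w_global * server_mask

# Client side: a D2D chain passes the model between clients, so only
# the last client in the chain communicates with the server.
w = w_sparse.copy()
for _ in range(3):  # three chained clients
    w = local_train(w)

# Complementary upload: clients send only the weights the server pruned
# away, so the two directions together cover the full parameter set.
upload = w * ~server_mask
w_global = w_global * server_mask + upload

Under these assumptions, each downlink carries only the server's kept weights and each uplink only their complement, which is where the claimed reduction in transmitted parameters would come from.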

Cite

Text

Xu et al. "A High-Efficiency Federated Learning Method Using Complementary Pruning for D2D Communication (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/aaai.v39i28.35318

Markdown

[Xu et al. "A High-Efficiency Federated Learning Method Using Complementary Pruning for D2D Communication (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/xu2025aaai-high/) doi:10.1609/aaai.v39i28.35318

BibTeX

@inproceedings{xu2025aaai-high,
  title     = {{A High-Efficiency Federated Learning Method Using Complementary Pruning for D2D Communication (Student Abstract)}},
  author    = {Xu, Xiaoqing and Pei, Jiaming and Wang, Lukun},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {29541--29542},
  doi       = {10.1609/aaai.v39i28.35318},
  url       = {https://mlanthology.org/aaai/2025/xu2025aaai-high/}
}