Efficient Federated Learning via Clients-to-Server Knowledge Distillation (Student Abstract)
Abstract
To reduce the substantial communication costs that federated learning incurs while training the global model, and to make model updates more efficient on both the clients and the server, we integrate knowledge distillation into the federated learning framework. The resulting approach, termed ClientsToServerKDFL, streamlines distillation by transferring knowledge directly from the clients to the server, where the learning is performed, avoiding heavy computation on the many clients. Iterating this process preserves model accuracy while curbing communication costs. Experimental results validate the effectiveness of the algorithm.
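The abstract does not spell out the mechanics of ClientsToServerKDFL, but the general clients-to-server distillation idea can be illustrated with a minimal sketch. The following Python/PyTorch code assumes (these are assumptions, not details from the paper) that each client uploads soft logits computed on a small shared proxy batch, and that the server distills a global model against the averaged client logits; all function names and hyperparameters are hypothetical.

```python
# Minimal sketch of clients-to-server knowledge distillation in federated
# learning. Assumptions (not from the paper): clients share soft logits on a
# shared proxy batch; the server distills a global model from their average.
import torch
import torch.nn as nn
import torch.nn.functional as F


def client_logits(client_model, proxy_x):
    """Client-side step: produce soft predictions on shared proxy inputs.

    Only these logits are uploaded, so the per-round payload scales with the
    proxy batch size rather than with the full model size.
    """
    client_model.eval()
    with torch.no_grad():
        return client_model(proxy_x)


def server_distill(server_model, proxy_x, client_logit_list,
                   temperature=2.0, lr=1e-3, steps=10):
    """Server-side step: fit the global model to the averaged client logits."""
    teacher = torch.stack(client_logit_list).mean(dim=0)   # aggregate client knowledge
    soft_targets = F.softmax(teacher / temperature, dim=-1)
    opt = torch.optim.SGD(server_model.parameters(), lr=lr)
    server_model.train()
    for _ in range(steps):
        opt.zero_grad()
        student_log_probs = F.log_softmax(server_model(proxy_x) / temperature, dim=-1)
        # Standard distillation loss: KL between student and teacher soft targets.
        loss = F.kl_div(student_log_probs, soft_targets,
                        reduction="batchmean") * temperature ** 2
        loss.backward()
        opt.step()
    return server_model


if __name__ == "__main__":
    # Toy usage: five stand-in client models, one global model, shared proxy batch.
    proxy_x = torch.randn(32, 20)
    clients = [nn.Linear(20, 10) for _ in range(5)]
    server = nn.Linear(20, 10)
    logits = [client_logits(c, proxy_x) for c in clients]
    server = server_distill(server, proxy_x, logits)
```

In this sketch the round-trip communication is one batch of logits per client instead of full model weights, which is the cost reduction the abstract attributes to moving distillation from the clients to the server.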
Cite
Text
Sun et al. "Efficient Federated Learning via Clients-to-Server Knowledge Distillation (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35304
Markdown
[Sun et al. "Efficient Federated Learning via Clients-to-Server Knowledge Distillation (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/sun2025aaai-efficient/) doi:10.1609/AAAI.V39I28.35304
BibTeX
@inproceedings{sun2025aaai-efficient,
title = {{Efficient Federated Learning via Clients-to-Server Knowledge Distillation (Student Abstract)}},
author = {Sun, Huifang and Pei, Jiaming and Wang, Lukun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {29504-29505},
doi = {10.1609/AAAI.V39I28.35304},
url = {https://mlanthology.org/aaai/2025/sun2025aaai-efficient/}
}