Ensemble Distillation for Robust Model Fusion in Federated Learning
Abstract
Federated Learning (FL) is a machine learning setting where many devices collaboratively train a model while keeping the training data decentralized. In most current training schemes, the central model is refined by averaging the parameters of the server model with the updated parameters from the client side. However, directly averaging model parameters is only possible if all models have the same structure and size, which can be a restrictive constraint in many scenarios.
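As a point of reference for the averaging scheme the abstract describes, here is a minimal sketch of FedAvg-style parameter averaging (this illustrates the baseline the paper contrasts against, not the paper's ensemble-distillation method); models are represented as flat lists of floats purely for illustration:

```python
# Sketch of server-side parameter averaging in FL (FedAvg-style):
# the central model is refined by averaging client parameter updates,
# weighted by each client's local dataset size.
# Representing models as flat float lists is an illustrative assumption.

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    avg = [0.0] * dim
    for params, n in zip(client_params, client_sizes):
        weight = n / total
        for i, p in enumerate(params):
            avg[i] += weight * p
    return avg

# Two clients with equal data sizes: the server takes the plain mean.
server_params = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 1])
```

Note that this coordinate-wise average is only well defined when every client model shares the same architecture and parameter count, which is exactly the restriction motivating the paper's distillation-based fusion.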
Cite
Text
Lin et al. "Ensemble Distillation for Robust Model Fusion in Federated Learning." Neural Information Processing Systems, 2020.

Markdown
[Lin et al. "Ensemble Distillation for Robust Model Fusion in Federated Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/lin2020neurips-ensemble/)

BibTeX
@inproceedings{lin2020neurips-ensemble,
  title     = {{Ensemble Distillation for Robust Model Fusion in Federated Learning}},
  author    = {Lin, Tao and Kong, Lingjing and Stich, Sebastian U and Jaggi, Martin},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/lin2020neurips-ensemble/}
}