Quantization Robust Federated Learning for Efficient Inference on Heterogeneous Devices
Abstract
Federated Learning (FL) is a machine learning paradigm for distributively learning models from decentralized data that remains on-device. Despite the success of standard federated optimization methods such as Federated Averaging (FedAvg), the energy demands and hardware-induced constraints of on-device learning have not been sufficiently considered in the literature. In particular, an essential requirement for on-device learning is to enable trained models to be quantized to various bit-widths, depending on the energy needs and heterogeneous hardware designs across the federation. In this work, we introduce multiple variants of the federated averaging algorithm that train neural networks robust to quantization. Such networks can be quantized to various bit-widths with only a limited reduction in full-precision model accuracy. We perform extensive experiments on standard FL benchmarks to evaluate our proposed FedAvg variants for quantization robustness and provide a convergence analysis for our Quantization-Aware variants in FL. Our results demonstrate that integrating quantization robustness yields FL models that are significantly more robust to different bit-widths during quantized on-device inference.
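The abstract does not spell out the proposed algorithms, so the sketch below is only a minimal illustration of the general idea it describes: combining FedAvg's weighted model averaging with simulated ("fake") quantization during local updates so the trained model stays accurate when later quantized to a chosen bit-width. All names here (quantize, fedavg_round, client_grad), the uniform symmetric quantizer, and the toy quadratic client objectives are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Toy setup: each client i pulls the model toward its own target vector,
# i.e. client i minimizes ||w - target_i||^2. This stands in for real
# on-device training data and losses (illustrative assumption).
targets = [np.array([1.0, -2.0, 0.5]), np.array([0.0, 1.5, -1.0])]

def client_grad(i, w):
    # Gradient of client i's quadratic loss ||w - target_i||^2.
    return 2.0 * (w - targets[i])

def quantize(w, bits):
    # Uniform symmetric quantization of a weight tensor to `bits` bit-width.
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

def fedavg_round(global_w, client_sizes, bits=4, lr=0.1):
    # One FedAvg round: each client takes a local SGD step on a
    # fake-quantized copy of the global model (simulating low-bit
    # on-device weights), then the server averages the local models
    # weighted by client data size, as in standard FedAvg.
    total = sum(client_sizes)
    new_w = np.zeros_like(global_w)
    for i, n in enumerate(client_sizes):
        w_q = quantize(global_w, bits)
        w_local = w_q - lr * client_grad(i, w_q)
        new_w += (n / total) * w_local
    return new_w

w = np.zeros(3)
for _ in range(50):
    w = fedavg_round(w, client_sizes=[100, 300])
print("trained weights: ", w)
print("8-bit quantized: ", quantize(w, 8))
```

Under these assumptions, applying the quantizer to the model before each local step is one simple way to expose training to quantization noise; the server-side weighted average is unchanged from standard FedAvg, so any robustness comes entirely from the local updates.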
Cite
Text
Gupta et al. "Quantization Robust Federated Learning for Efficient Inference on Heterogeneous Devices." Transactions on Machine Learning Research, 2023.
Markdown
[Gupta et al. "Quantization Robust Federated Learning for Efficient Inference on Heterogeneous Devices." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/gupta2023tmlr-quantization/)
BibTeX
@article{gupta2023tmlr-quantization,
title = {{Quantization Robust Federated Learning for Efficient Inference on Heterogeneous Devices}},
author = {Gupta, Kartik and Fournarakis, Marios and Reisser, Matthias and Louizos, Christos and Nagel, Markus},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/gupta2023tmlr-quantization/}
}