Fair Resource Allocation in Federated Learning

Abstract

Federated learning involves training statistical models in massive, heterogeneous networks. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair (specifically, a more uniform) accuracy distribution across devices in federated networks. To solve q-FFL, we devise a communication-efficient method, q-FedAvg, that is suited to federated networks. We validate both the effectiveness of q-FFL and the efficiency of q-FedAvg on a suite of federated datasets with both convex and non-convex models, and show that q-FFL (along with q-FedAvg) outperforms existing baselines in terms of the resulting fairness, flexibility, and efficiency.
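As context for the abstract above, the q-FFL objective reweights each device's loss by raising it to the power q+1, so larger q penalizes devices with high loss more heavily. The sketch below is a minimal illustration of that aggregation (the function name and the example losses/weights are hypothetical, not from the paper's code):

```python
def q_ffl_objective(losses, weights, q):
    """Aggregate per-device losses under the q-FFL objective:
    f_q(w) = sum_k (p_k / (q + 1)) * F_k(w)^(q + 1).

    q = 0 recovers the standard weighted-average objective used by
    FedAvg; larger q up-weights devices with higher loss, pushing
    toward a more uniform accuracy distribution across devices.
    """
    return sum(p / (q + 1) * f ** (q + 1) for p, f in zip(weights, losses))

# Hypothetical per-device losses F_k(w) and device weights p_k.
losses = [0.2, 0.8]
weights = [0.5, 0.5]

print(q_ffl_objective(losses, weights, 0))  # q=0: plain weighted average, 0.5
print(q_ffl_objective(losses, weights, 2))  # q=2: the high-loss device dominates
```

With q=0 the two devices contribute in proportion to their weights; with q=2 the second device's loss term (0.8^3) is 64 times the first's (0.2^3), illustrating how the objective trades average performance for uniformity.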

Cite

Text

Li et al. "Fair Resource Allocation in Federated Learning." International Conference on Learning Representations, 2020.

Markdown

[Li et al. "Fair Resource Allocation in Federated Learning." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/li2020iclr-fair/)

BibTeX

@inproceedings{li2020iclr-fair,
  title     = {{Fair Resource Allocation in Federated Learning}},
  author    = {Li, Tian and Sanjabi, Maziar and Beirami, Ahmad and Smith, Virginia},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/li2020iclr-fair/}
}