Federated Learning Under Covariate Shifts with Generalization Guarantees
Abstract
This paper addresses intra-client and inter-client covariate shifts in federated learning (FL) with a focus on overall generalization performance. To handle covariate shifts, we formulate a new global model training paradigm and propose Federated Importance-Weighted Empirical Risk Minimization (FTW-ERM), along with improved density-ratio matching methods that do not require perfect knowledge of the supremum of the true ratios. We also propose the communication-efficient variant FITW-ERM, which offers the same level of privacy guarantees as classical ERM in FL. We theoretically show that FTW-ERM achieves smaller generalization error than classical ERM under certain settings. Experimental results demonstrate the superiority of FTW-ERM over existing FL baselines in challenging imbalanced federated settings with data distribution shifts across clients.
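The core idea behind importance-weighted ERM is to reweight each per-sample loss by a density ratio w(x) = p_target(x) / p_client(x), so that minimizing the reweighted empirical risk on shifted client data targets the desired distribution. A minimal sketch of this reweighting (not the paper's actual method): the Gaussian distributions, logistic loss, and closed-form ratio below are illustrative assumptions; in practice the ratio would be estimated, e.g. by density-ratio matching.

```python
import numpy as np

def logistic_loss(theta, X, y):
    # Per-sample logistic loss for labels y in {-1, +1}.
    margins = y * (X @ theta)
    return np.log1p(np.exp(-margins))

def iw_erm_risk(theta, X, y, weights):
    # Importance-weighted empirical risk: (1/n) * sum_i w(x_i) * loss_i.
    return np.mean(weights * logistic_loss(theta, X, y))

rng = np.random.default_rng(0)
n, d = 200, 2
# A client's covariates are shifted: N(1, I) instead of the target N(0, I).
X = rng.normal(loc=1.0, scale=1.0, size=(n, d))
theta = np.array([1.0, -1.0])
y = np.sign(X @ theta + 0.1 * rng.normal(size=n))

def density_ratio(X):
    # Closed-form ratio p_target/p_client for N(0, I) vs. N(1, I);
    # purely illustrative -- real FL clients would estimate this.
    log_p_target = -0.5 * np.sum(X**2, axis=1)
    log_p_client = -0.5 * np.sum((X - 1.0)**2, axis=1)
    return np.exp(log_p_target - log_p_client)

w = density_ratio(X)
risk_unweighted = iw_erm_risk(theta, X, y, np.ones(n))
risk_weighted = iw_erm_risk(theta, X, y, w)
print(risk_unweighted, risk_weighted)
```

The weighted risk is an unbiased estimate of the risk under the target distribution, which is what allows generalization guarantees on the global model despite covariate shift at the clients.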
Cite
Text
Ramezani-Kebrya et al. "Federated Learning Under Covariate Shifts with Generalization Guarantees." Transactions on Machine Learning Research, 2023.

Markdown
[Ramezani-Kebrya et al. "Federated Learning Under Covariate Shifts with Generalization Guarantees." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/ramezanikebrya2023tmlr-federated/)

BibTeX
@article{ramezanikebrya2023tmlr-federated,
title = {{Federated Learning Under Covariate Shifts with Generalization Guarantees}},
author = {Ramezani-Kebrya, Ali and Liu, Fanghui and Pethick, Thomas and Chrysos, Grigorios and Cevher, Volkan},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/ramezanikebrya2023tmlr-federated/}
}