Conditional Moment Alignment for Improved Generalization in Federated Learning

Abstract

In this work, we study model-heterogeneous Federated Learning (FL) for classification, where different clients have different model architectures. Unlike existing works on model heterogeneity, we neither require access to a public dataset nor impose constraints on clients' model architectures, and we keep the clients' models and data private. We prove a generalization result that provides fundamental insights into the role of representations in FL, and we propose a theoretically grounded algorithm, Federated Conditional Moment Alignment (FedCMA), that aligns the class-conditional distributions of each client in the feature space. We prove convergence and show empirically that FedCMA outperforms other baselines on CIFAR-10, MNIST, EMNIST, and FEMNIST in the considered setting.
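The abstract's core idea of aligning class-conditional distributions via their moments can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual algorithm: it matches only the first conditional moment (per-class feature means) of a client's representations against server-aggregated global means, with a simple squared-distance penalty. All function names below are assumptions for illustration.

```python
import numpy as np

def class_conditional_means(features, labels, num_classes):
    """Per-class mean of feature vectors (the first conditional moment).

    features: (n, d) array of representations; labels: (n,) int array.
    Classes absent from this client contribute a zero row.
    """
    d = features.shape[1]
    means = np.zeros((num_classes, d))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            means[c] = features[mask].mean(axis=0)
    return means

def alignment_penalty(local_means, global_means):
    """Squared Frobenius distance between a client's class-conditional
    feature means and the aggregated global means. Adding this term to a
    client's training loss pulls its feature space toward the global one."""
    return float(np.sum((local_means - global_means) ** 2))
```

In a federated round, each client would report its per-class means; the server averages them into global means and broadcasts them back, so the penalty can be computed locally without sharing raw data or model weights, which is consistent with the privacy setting described above.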

Cite

Text

Regatti et al. "Conditional Moment Alignment for Improved Generalization in Federated Learning." NeurIPS 2022 Workshops: Federated_Learning, 2022.

Markdown

[Regatti et al. "Conditional Moment Alignment for Improved Generalization in Federated Learning." NeurIPS 2022 Workshops: Federated_Learning, 2022.](https://mlanthology.org/neuripsw/2022/regatti2022neuripsw-conditional/)

BibTeX

@inproceedings{regatti2022neuripsw-conditional,
  title     = {{Conditional Moment Alignment for Improved Generalization in Federated Learning}},
  author    = {Regatti, Jayanth Reddy and Lu, Songtao and Gupta, Abhishek and Shroff, Ness},
  booktitle = {NeurIPS 2022 Workshops: Federated_Learning},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/regatti2022neuripsw-conditional/}
}