Distributed Architecture Search over Heterogeneous Distributions

Abstract

Federated learning (FL) is an efficient learning framework that enables distributed machine learning when data cannot be shared with a centralized server. Recent advances in FL train a single predefined architecture for all clients. However, because clients' data are invisible to the server and data distributions are non-identical across clients, a predefined architecture discovered in a centralized setting may not be optimal for every client in FL. Motivated by this challenge, we introduce SPIDER, an algorithmic framework that aims to Search PersonalIzed neural architectures for feDERated learning. SPIDER is designed around two unique features: (1) alternately optimizing one architecture-homogeneous global model in a generic FL manner and architecture-heterogeneous local models that are connected to the global model by weight-sharing-based regularization, and (2) obtaining architecture-heterogeneous local models through a perturbation-based neural architecture search method. Experimental results demonstrate superior prediction performance compared with other state-of-the-art personalization methods.
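To make feature (1) concrete, the weight-sharing-based regularization can be pictured as a proximal penalty that ties each client's personalized model to the global model on the parameters the two models share. The sketch below is a minimal PyTorch illustration under that assumption; the function name local_update, the squared-norm penalty form, and the coefficient lam are illustrative choices, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def local_update(local_model, global_model, loader, lam=0.1, lr=0.01):
    """One client epoch (illustrative): task loss plus a proximal penalty
    lam * ||w_local - w_global||^2 over the parameters shared by name
    between the heterogeneous local model and the global model."""
    opt = torch.optim.SGD(local_model.parameters(), lr=lr)
    global_params = dict(global_model.named_parameters())
    for x, y in loader:
        opt.zero_grad()
        loss = F.cross_entropy(local_model(x), y)
        # Weight-sharing regularization: pull the local weights that also
        # exist in the global model toward their global counterparts.
        reg = sum(
            (p - global_params[name].detach()).pow(2).sum()
            for name, p in local_model.named_parameters()
            if name in global_params
        )
        (loss + lam * reg).backward()
        opt.step()

In this reading, the server alternates between aggregating the shared global model as in generic FL and letting each client run updates like the one above on its architecture-heterogeneous local model.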

Cite

Text

Mushtaq et al. "Distributed Architecture Search over Heterogeneous Distributions." Transactions on Machine Learning Research, 2023.

Markdown

[Mushtaq et al. "Distributed Architecture Search over Heterogeneous Distributions." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/mushtaq2023tmlr-distributed/)

BibTeX

@article{mushtaq2023tmlr-distributed,
  title     = {{Distributed Architecture Search over Heterogeneous Distributions}},
  author    = {Mushtaq, Erum and He, Chaoyang and Ding, Jie and Avestimehr, Salman},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/mushtaq2023tmlr-distributed/}
}