Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings
Abstract
Mixture-of-Experts (MoEs) achieve scalability by dynamically activating subsets of their components. Yet, our understanding of how expertise emerges through the joint training of gating mechanisms and experts remains incomplete, especially in scenarios without clear task partitions. Motivated by inference costs and data heterogeneity, we study how jointly training gating functions and experts can dynamically allocate domain-specific expertise across multiple underlying data distributions. As an instance of our framework tailored to decentralized training scenarios, we introduce *Dynamically Decentralized Orchestration of MoEs* (*DDOME*). *DDOME* leverages the heterogeneity arising from distributional shifts across decentralized data sources to specialize experts dynamically. By integrating a pretrained common expert to inform the gating function, *DDOME* selects personalized expert subsets on the fly, enabling just-in-time personalization. We empirically validate *DDOME* in a Federated Learning (FL) context: *DDOME* attains accuracy improvements of 4\% up to 24\% over state-of-the-art FL baselines on image and text classification tasks, while maintaining competitive zero-shot generalization. Furthermore, we provide theoretical insights confirming that joint gating-expert training is critical for achieving meaningful expert specialization.
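To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of the core idea: a frozen pretrained common expert informs a gating network, which is trained jointly with the experts and selects a sparse, input-dependent expert subset. All names and details here (`CommonExpertGatedMoE`, `num_experts`, `top_k`, the linear experts) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonExpertGatedMoE(nn.Module):
    """Gate conditioned on a frozen pretrained common expert; the gate and
    experts are trained jointly. Illustrative sketch, not the paper's code."""

    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        # Stand-in for a pretrained "common" expert; frozen during training.
        self.common = nn.Linear(dim, dim)
        self.common.requires_grad_(False)
        # Gating network scores experts from the common expert's features.
        self.gate = nn.Linear(dim, num_experts)
        # Domain-specific experts, updated jointly with the gate.
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Personalized, input-dependent selection: the gate sees the input
        # through the common expert and activates only top_k experts.
        scores = self.gate(self.common(x))              # (batch, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # sparse routing
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Joint training step: gradients flow to both the gate and the selected
# experts, while the common expert stays fixed.
moe = CommonExpertGatedMoE(dim=16)
opt = torch.optim.SGD([p for p in moe.parameters() if p.requires_grad], lr=0.1)
x, y = torch.randn(8, 16), torch.randn(8, 16)
loss = F.mse_loss(moe(x), y)
loss.backward()
opt.step()
```

Because the gate receives gradients through the task loss alongside the experts, routing and specialization co-adapt; in a decentralized setting, each data source would run such updates locally on its own distribution.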
Cite
Text
Farhat et al. "Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings." Advances in Neural Information Processing Systems, 2025.

Markdown

[Farhat et al. "Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/farhat2025neurips-learning/)

BibTeX
@inproceedings{farhat2025neurips-learning,
  title = {{Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings}},
  author = {Farhat, Yehya and Shili, Hamza ElMokhtar and Liao, Fangshuo and Dun, Chen and Garcia, Mirian Del Carmen Hipolito and Zheng, Guoqing and Awadallah, Ahmed Hassan and Sim, Robert and Dimitriadis, Dimitrios and Kyrillidis, Anastasios},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/farhat2025neurips-learning/}
}