Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient

Abstract

Mixture of Experts (MoE) architectures have significantly increased computational efficiency in both research and real-world applications of large-scale machine learning models. However, their scalability and efficiency under memory constraints remain relatively underexplored. In this work, we present joint scaling laws for dense and MoE models, incorporating key factors such as the number of active parameters, dataset size, and the number of experts. Our findings provide a principled framework for selecting the optimal MoE configuration under fixed memory and compute budgets. Surprisingly, we show that MoE models can be more memory-efficient than dense models, contradicting conventional wisdom. Extensive empirical validation confirms the theoretical predictions of our scaling laws. These results offer actionable insights for designing and deploying MoE models in practical large-scale training scenarios.
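To make the abstract's idea of "selecting the optimal MoE configuration under fixed memory and compute budgets" concrete, below is a minimal, purely illustrative sketch in Python. It assumes a generic Chinchilla-style loss surrogate extended with an expert-count term and a crude parameter-memory model; the functional form, constants, and the helper names (`Config`, `loss`, `memory_params`, `best_under_memory`) are hypothetical placeholders and are not the fitted scaling law or memory model from the paper.

```python
# Illustrative sketch only: a Chinchilla-style loss surrogate extended with a
# hypothetical expert-count term, used to pick an MoE configuration under a
# fixed parameter-memory budget. All constants and functional forms are
# made-up placeholders, NOT the fitted joint scaling law from the paper.

from dataclasses import dataclass


@dataclass
class Config:
    n_active: float   # active parameters per token
    n_experts: int    # number of experts (1 = dense baseline)
    tokens: float     # training tokens


def memory_params(cfg: Config) -> float:
    # Crude memory proxy: total parameters scale with the number of experts
    # (shared layers ignored for simplicity).
    return cfg.n_active * cfg.n_experts


def loss(cfg: Config, A=400.0, B=1800.0, alpha=0.34, beta=0.28, E0=1.9) -> float:
    # Hypothetical joint surrogate: power law in active params and tokens,
    # with a saturating benefit from adding experts.
    expert_gain = 1.0 / (1.0 + 0.1 * (cfg.n_experts - 1) ** 0.5)
    return E0 + expert_gain * A / cfg.n_active ** alpha + B / cfg.tokens ** beta


def best_under_memory(budget_params: float, candidates: list) -> Config:
    # Among configurations that fit the parameter-memory budget,
    # return the one with the lowest predicted loss.
    feasible = [c for c in candidates if memory_params(c) <= budget_params]
    return min(feasible, key=loss)


if __name__ == "__main__":
    grid = [
        Config(n_active=1e9,   n_experts=1,  tokens=2e10),  # dense baseline
        Config(n_active=5e8,   n_experts=8,  tokens=4e10),
        Config(n_active=2.5e8, n_experts=32, tokens=8e10),
    ]
    best = best_under_memory(budget_params=8e9, candidates=grid)
    print(best, loss(best))
```

In practice, one would replace the placeholder surrogate with coefficients fitted to training runs, as the paper does empirically; the sketch only shows the shape of the resulting optimization: predict loss as a function of active parameters, tokens, and experts, then minimize it subject to a memory constraint.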

Cite

Text

Ludziejewski et al. "Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient." ICLR 2025 Workshops: SLLM, 2025.

Markdown

[Ludziejewski et al. "Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient." ICLR 2025 Workshops: SLLM, 2025.](https://mlanthology.org/iclrw/2025/ludziejewski2025iclrw-joint/)

BibTeX

@inproceedings{ludziejewski2025iclrw-joint,
  title     = {{Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient}},
  author    = {Ludziejewski, Jan and Pióro, Maciej and Krajewski, Jakub and Krutul, Michał and Małaśnicki, Jan and Stefaniak, Maciej and Sankowski, Piotr and Cygan, Marek and Adamczewski, Kamil and Miłoś, Piotr and Jaszczur, Sebastian},
  booktitle = {ICLR 2025 Workshops: SLLM},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/ludziejewski2025iclrw-joint/}
}