Mozart: Modularized and Efficient MoE Training on 3.5d Wafer-Scale Chiplet Architectures
Abstract
The Mixture-of-Experts (MoE) architecture offers enhanced efficiency for Large Language Models (LLMs) through modularized computation, yet its inherent sparsity poses significant hardware deployment challenges, including poor memory locality, communication overhead, and inefficient utilization of computing resources. Inspired by the modular organization of the human brain, we propose $\texttt{Mozart}$, a novel algorithm-hardware co-design framework tailored for efficient training of MoE-based LLMs on 3.5D wafer-scale chiplet architectures. On the algorithm side, $\texttt{Mozart}$ exploits the inherent modularity of chiplets and introduces: (1) an expert allocation strategy that enables efficient on-package all-to-all communication, and (2) a fine-grained scheduling mechanism that improves communication-computation overlap by streaming tokens and experts. On the architecture side, $\texttt{Mozart}$ adaptively co-locates heterogeneous modules on specialized chiplets with a 2.5D NoP-Tree topology and a hierarchical memory structure. Evaluation across three popular MoE models demonstrates significant efficiency gains, enabling more effective parallelization and resource utilization for large-scale modularized MoE-LLMs.
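To make the communication-computation overlap idea concrete, below is a minimal, hedged PyTorch sketch (not the paper's Mozart implementation) of streaming token chunks through an expert: each chunk's "dispatch" (a device copy standing in for all-to-all) is issued on a side CUDA stream while the previously dispatched chunk runs through the expert FFN. All names (`Expert`, `stream_moe_layer`, `num_chunks`) are hypothetical.

```python
# Illustrative sketch of communication-computation overlap via token streaming.
# Assumptions: single device, one expert, a tensor copy as a stand-in for all-to-all.
import torch
import torch.nn as nn


class Expert(nn.Module):
    """Stand-in expert FFN."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x):
        return self.net(x)


def stream_moe_layer(tokens: torch.Tensor, expert: Expert, num_chunks: int = 4) -> torch.Tensor:
    """Split tokens into chunks so chunk i+1's dispatch can overlap chunk i's compute."""
    use_cuda = tokens.is_cuda
    comm_stream = torch.cuda.Stream() if use_cuda else None
    chunks = tokens.chunk(num_chunks, dim=0)
    outputs, in_flight = [], []

    for chunk in chunks:
        if use_cuda:
            # Issue the "communication" for this chunk on the side stream.
            with torch.cuda.stream(comm_stream):
                dispatched = chunk.clone()  # placeholder for an all-to-all dispatch
        else:
            dispatched = chunk.clone()
        in_flight.append(dispatched)

        # Run the expert on the previously dispatched chunk while this one is in flight.
        if len(in_flight) > 1:
            ready = in_flight.pop(0)
            if use_cuda:
                torch.cuda.current_stream().wait_stream(comm_stream)
            outputs.append(expert(ready))

    # Drain any remaining dispatched chunks.
    for ready in in_flight:
        if use_cuda:
            torch.cuda.current_stream().wait_stream(comm_stream)
        outputs.append(expert(ready))
    return torch.cat(outputs, dim=0)


if __name__ == "__main__":
    x = torch.randn(64, 128)
    out = stream_moe_layer(x, Expert(128, 512))
    print(out.shape)  # torch.Size([64, 128])
```

In a real expert-parallel setting the placeholder copy would be a collective (e.g., an all-to-all across chiplets), and the scheduler would also stream experts, but the pipelining structure is the same.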
Cite
Text
Luo et al. "Mozart: Modularized and Efficient MoE Training on 3.5d Wafer-Scale Chiplet Architectures." Advances in Neural Information Processing Systems, 2025.
Markdown
[Luo et al. "Mozart: Modularized and Efficient MoE Training on 3.5d Wafer-Scale Chiplet Architectures." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/luo2025neurips-mozart/)
BibTeX
@inproceedings{luo2025neurips-mozart,
  title     = {{Mozart: Modularized and Efficient MoE Training on 3.5d Wafer-Scale Chiplet Architectures}},
  author    = {Luo, Shuqing and Han, Ye and Li, Pingzhi and Qin, Jiayin and Peng, Jie and Zhao, Yang Katie and Cao, Yu and Chen, Tianlong},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/luo2025neurips-mozart/}
}