Continual Pre-Training of MoEs: How Robust Is Your Router?

Abstract

Sparsely-activated Mixture of Experts (MoE) transformers are promising architectures for foundation models. Compared to dense transformers that require the same amount of floating-point operations (FLOPs) per forward pass, MoEs benefit from improved sample efficiency at training time and achieve much stronger performance. Many closed-source and open-source frontier language models have thus adopted an MoE architecture. Naturally, practitioners will want to extend the capabilities of these models with large amounts of newly collected data without completely re-training them. Prior work has shown that a simple combination of replay, learning rate re-warming, and re-decaying can enable the continual pre-training (CPT) of dense decoder-only transformers with minimal performance degradation compared to full re-training. In the case of decoder-only MoE transformers, however, it is unclear how the routing algorithm will impact continual pre-training performance: 1) *do the MoE transformer's routers exacerbate forgetting relative to a dense model?*; 2) *do the routers maintain a balanced load on previous distributions after CPT?*; 3) *are the same strategies applied to dense models sufficient to continually pre-train MoE LLMs?* In what follows, we conduct a large-scale study training a 500M parameter dense transformer and four 500M-active/2B-total parameter MoE transformers, following the Switch Transformer architecture and a granular DeepSeek-inspired architecture. Each model is trained for 600B tokens. Our results establish a surprising robustness to distribution shifts for MoEs using both Sinkhorn-Balanced and Z-and-Aux-loss-balanced routing algorithms, even in MoEs continually pre-trained without replay. Moreover, we show that MoE LLMs maintain their sample efficiency (relative to a FLOP-matched dense model) during CPT and that they can match the performance of a fully re-trained MoE at a fraction of the cost.
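The abstract's question of whether routers "maintain a balanced load" refers to the load-balancing objective used when training MoE routers. As an illustration only (not the paper's implementation), here is a minimal NumPy sketch of Switch-Transformer-style top-1 routing with the standard auxiliary load-balancing loss, `alpha * N * sum_i(f_i * P_i)`, where `f_i` is the fraction of tokens dispatched to expert `i` and `P_i` is the mean router probability for expert `i`; all names and the `alpha` default are assumptions for this sketch.

```python
import numpy as np

def switch_router(x, w_router, num_experts, alpha=0.01):
    """Illustrative top-1 (Switch-style) routing with an auxiliary
    load-balancing loss. Not the paper's code; a sketch of the idea.

    x: (tokens, d_model) token activations; w_router: (d_model, num_experts).
    Returns per-token expert assignments, gate values, and the aux loss.
    """
    logits = x @ w_router                          # (tokens, num_experts)
    logits = logits - logits.max(axis=-1, keepdims=True)  # stable softmax
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)

    assign = probs.argmax(axis=-1)                 # top-1 expert per token
    gate = probs[np.arange(len(assign)), assign]   # gate value for chosen expert

    # f_i: fraction of tokens routed to expert i; P_i: mean router probability.
    f = np.bincount(assign, minlength=num_experts) / len(assign)
    P = probs.mean(axis=0)
    aux_loss = alpha * num_experts * np.dot(f, P)  # minimized (= alpha) at uniform load
    return assign, gate, aux_loss
```

When routing is perfectly balanced (uniform `f` and `P`), the loss reaches its minimum value `alpha`; skewed routing drives it higher, which is one way to probe whether a continually pre-trained router still balances load on earlier distributions.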

Cite

Text

Thérien et al. "Continual Pre-Training of MoEs: How Robust Is Your Router?" Transactions on Machine Learning Research, 2025.

Markdown

[Thérien et al. "Continual Pre-Training of MoEs: How Robust Is Your Router?" Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/therien2025tmlr-continual/)

BibTeX

@article{therien2025tmlr-continual,
  title     = {{Continual Pre-Training of MoEs: How Robust Is Your Router?}},
  author    = {Thérien, Benjamin and Joseph, Charles-Étienne and Sarwar, Zain and Panda, Ashwinee and Das, Anirban and Zhang, Shi-Xiong and Rawls, Stephen and Sahu, Sambit and Belilovsky, Eugene and Rish, Irina},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/therien2025tmlr-continual/}
}