Beyond Parameter Count: Implicit Bias in Soft Mixture of Experts

Abstract

The traditional viewpoint on Sparse Mixture of Experts (MoE) models is that instead of training a single _large_ expert, which is computationally expensive, we can train many _small_ experts. The hope is that if the total parameter count of the small experts equals that of the single large expert, then we retain the representation power of the large expert while gaining computational tractability and promoting expert specialization. The recently introduced Soft MoE replaces the Sparse MoE's discrete routing mechanism with a differentiable gating function that smoothly mixes tokens. While this smooth gating function successfully mitigates the various training instabilities associated with Sparse MoE, it is unclear whether it induces implicit biases that affect Soft MoE's representation power or potential for expert specialization. We prove that Soft MoE with a single arbitrarily powerful expert cannot represent simple convex functions. This shows that Soft MoE's success cannot be explained by the traditional viewpoint of many small experts collectively mimicking the representation power of a single large expert, and that multiple experts are actually _necessary_ to achieve good representation power (even for a fixed total parameter count). Continuing along this line of investigation, we introduce a notion of expert specialization for Soft MoE and, while varying the number of experts yet fixing the total parameter count, consider the following (computationally intractable) task: given any input, how can we discover the expert subset that is specialized to predict this input's label? We empirically show that when there are many small experts, the architecture is implicitly biased in a fashion that allows us to efficiently approximate the specialized expert subset. Our method can be easily implemented to potentially reduce computation during inference. For example, using our method on ImageNet, one can perform inference using only $1/8$ of the experts and still retain $99\%$ of the test accuracy of using all experts.
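
The abstract describes the Soft MoE gating only at a high level. For reference, below is a minimal PyTorch sketch of the Soft MoE dispatch/combine mechanism (Puigcerver et al., 2023) that the paper analyzes: tokens are softly mixed into expert slots via a softmax over tokens, each expert processes its slots, and the outputs are mixed back via a softmax over slots. All names, shapes, and the parameterization here are illustrative assumptions, not the authors' implementation, and the sketch does not include the paper's expert-subset selection method.

```python
import torch
import torch.nn as nn


class SoftMoE(nn.Module):
    """Minimal Soft MoE layer: soft dispatch to slots, expert MLPs, soft combine.

    Hypothetical sketch following the Soft MoE formulation; not the authors' code.
    """

    def __init__(self, dim, num_experts, slots_per_expert, hidden_dim):
        super().__init__()
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert
        # One learned query vector per slot; num_experts * slots_per_expert slots total.
        self.slot_embeds = nn.Parameter(
            torch.randn(num_experts * slots_per_expert, dim) / dim ** 0.5
        )
        # Each expert is a small MLP; shrinking hidden_dim as num_experts grows
        # is one way to keep the total parameter count fixed across configurations.
        self.experts = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Linear(dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, dim)
                )
                for _ in range(num_experts)
            ]
        )

    def forward(self, x):  # x: (batch, tokens, dim)
        # Token-slot affinity logits: (batch, tokens, slots).
        logits = torch.einsum("btd,sd->bts", x, self.slot_embeds)
        dispatch = logits.softmax(dim=1)  # normalize over tokens (slot inputs)
        combine = logits.softmax(dim=2)   # normalize over slots (token outputs)

        # Each slot input is a convex combination of all tokens: (batch, slots, dim).
        slots = torch.einsum("bts,btd->bsd", dispatch, x)
        slots = slots.reshape(
            x.shape[0], self.num_experts, self.slots_per_expert, -1
        )
        # Each expert processes only its own slots.
        outs = torch.stack(
            [expert(slots[:, i]) for i, expert in enumerate(self.experts)], dim=1
        ).flatten(1, 2)  # back to (batch, slots, dim)

        # Each output token is a convex combination of all slot outputs.
        return torch.einsum("bts,bsd->btd", combine, outs)
```

Reducing inference cost as described in the abstract would amount to evaluating only a subset of `self.experts` and their corresponding slots for a given input; the criterion for identifying which subset is specialized to that input is the subject of the paper and is not reproduced in this sketch.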

Cite

Text

Chung et al. "Beyond Parameter Count: Implicit Bias in Soft Mixture of Experts." Transactions on Machine Learning Research, 2025.

Markdown

[Chung et al. "Beyond Parameter Count: Implicit Bias in Soft Mixture of Experts." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/chung2025tmlr-beyond/)

BibTeX

@article{chung2025tmlr-beyond,
  title     = {{Beyond Parameter Count: Implicit Bias in Soft Mixture of Experts}},
  author    = {Chung, Youngseog and Malik, Dhruv and Schneider, Jeff and Li, Yuanzhi and Singh, Aarti},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/chung2025tmlr-beyond/}
}