Sparse MoEs Meet Efficient Ensembles

Abstract

Machine learning models based on the aggregated outputs of submodels, either at the activation or prediction levels, often exhibit strong performance compared to individual models. We study the interplay of two popular classes of such models: ensembles of neural networks and sparse mixtures of experts (sparse MoEs). First, we show that the two approaches have complementary features whose combination is beneficial. This includes a comprehensive evaluation of sparse MoEs on uncertainty-related benchmarks. Then, we present efficient ensemble of experts (E$^3$), a scalable and simple ensemble of sparse MoEs that takes the best of both classes of models, while using up to 45% fewer FLOPs than a deep ensemble. Extensive experiments demonstrate the accuracy, log-likelihood, few-shot learning, robustness, and uncertainty improvements of E$^3$ over several challenging vision Transformer-based baselines. E$^3$ not only preserves its efficiency while scaling to models with up to 2.7B parameters, but also provides better predictive performance and uncertainty estimates for larger models.
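
To make the two ingredients mentioned in the abstract concrete, the sketch below shows a toy combination of sparse top-K expert routing with ensemble averaging: a shared pool of experts is split into disjoint partitions, each partition acts as one ensemble member, and the member outputs are averaged. This is an illustrative NumPy toy, not the paper's E$^3$ implementation; the per-partition routers, expert MLP shapes, and all sizes below are assumptions made for demonstration, and the paper's FLOPs savings rely on architectural details inside the vision Transformer that this sketch does not model.

```python
# Toy sketch: sparse top-K routing over a partitioned pool of experts,
# with the per-partition outputs averaged as ensemble members.
# All names, shapes, and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_hidden = 16, 32
num_experts, num_partitions, top_k = 8, 2, 1  # 2 members of 4 experts each

# One small MLP per expert: weights[e] = (W_in, W_out).
weights = [
    (rng.normal(size=(d_model, d_hidden)) * 0.02,
     rng.normal(size=(d_hidden, d_model)) * 0.02)
    for _ in range(num_experts)
]
# Separate router per partition (a simplifying assumption of this sketch).
routers = [rng.normal(size=(d_model, num_experts // num_partitions)) * 0.02
           for _ in range(num_partitions)]

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def expert_mlp(x, w):
    W_in, W_out = w
    return np.maximum(x @ W_in, 0.0) @ W_out  # ReLU MLP expert

def moe_partition(x, partition):
    """Route each token to its top-K experts within one partition."""
    experts_per_part = num_experts // num_partitions
    logits = x @ routers[partition]               # [tokens, experts_per_part]
    probs = softmax(logits)
    top = np.argsort(-probs, axis=-1)[:, :top_k]  # chosen expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for k in range(top_k):
            local = top[t, k]
            e = partition * experts_per_part + local  # global expert index
            out[t] += probs[t, local] * expert_mlp(x[t:t + 1], weights[e])[0]
    return out

def ensemble_of_moes(x):
    """Average the per-partition MoE outputs, treating partitions as members."""
    return np.mean([moe_partition(x, m) for m in range(num_partitions)], axis=0)

tokens = rng.normal(size=(4, d_model))
print(ensemble_of_moes(tokens).shape)  # (4, 16)
```

In this toy, each member only evaluates its own top-K experts per token, which is the kind of sparsity that lets an ensemble of MoEs stay cheaper than an ensemble of dense models; where exactly the averaging happens (activations versus predictions) and how members share computation are design choices studied in the paper itself.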

Cite

Text

Allingham et al. "Sparse MoEs Meet Efficient Ensembles." Transactions on Machine Learning Research, 2022.

Markdown

[Allingham et al. "Sparse MoEs Meet Efficient Ensembles." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/allingham2022tmlr-sparse/)

BibTeX

@article{allingham2022tmlr-sparse,
  title     = {{Sparse MoEs Meet Efficient Ensembles}},
  author    = {Allingham, James Urquhart and Wenzel, Florian and Mariet, Zelda E and Mustafa, Basil and Puigcerver, Joan and Houlsby, Neil and Jerfel, Ghassen and Fortuin, Vincent and Lakshminarayanan, Balaji and Snoek, Jasper and Tran, Dustin and Ruiz, Carlos Riquelme and Jenatton, Rodolphe},
  journal   = {Transactions on Machine Learning Research},
  year      = {2022},
  url       = {https://mlanthology.org/tmlr/2022/allingham2022tmlr-sparse/}
}