UMoE: Unifying Attention and FFN with Shared Experts
Abstract
Sparse Mixture of Experts (MoE) architectures have emerged as a promising approach for scaling Transformer models. While initial works primarily incorporated MoE into feed-forward network (FFN) layers, recent studies have explored extending the MoE paradigm to attention layers to enhance model performance. However, existing attention-based MoE layers require specialized implementations and demonstrate suboptimal performance compared to their FFN-based counterparts. In this paper, we aim to unify MoE designs in attention and FFN layers by introducing a novel reformulation of the attention mechanism that reveals an underlying FFN-like structure within attention modules. Our proposed architecture, UMoE, achieves superior performance through attention-based MoE layers while enabling efficient parameter sharing between FFN and attention components.
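The abstract describes attention as admitting an FFN-like view, so that one pool of experts can serve both attention and FFN MoE layers. The snippet below is a minimal sketch of that general idea only, assuming top-1 routing, a single attention head, and hypothetical module names (`Expert`, `SharedExpertMoE`, `AttentionMoE`); it is not the authors' implementation and omits details not stated in the abstract.

```python
# Illustrative sketch: attention viewed as (1) token mixing followed by
# (2) an FFN-like per-token transform, so the same experts can back both
# the attention MoE layer and the FFN MoE layer. All names, shapes, and
# routing choices here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """A two-layer FFN expert shared by attention and FFN MoE layers."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.gelu(self.up(x)))


class SharedExpertMoE(nn.Module):
    """Top-1 routing over a shared expert pool (simplified for clarity)."""
    def __init__(self, experts: nn.ModuleList, d_model: int):
        super().__init__()
        self.experts = experts
        self.router = nn.Linear(d_model, len(experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d_model); send each token to its top-1 expert.
        scores = F.softmax(self.router(x), dim=-1)
        top_w, top_idx = scores.max(dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out


class AttentionMoE(nn.Module):
    """Attention as token mixing followed by a shared-expert transform."""
    def __init__(self, moe: SharedExpertMoE, d_model: int):
        super().__init__()
        self.moe = moe
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = self.q(x) @ self.k(x).transpose(-2, -1) / x.size(-1) ** 0.5
        attn = F.softmax(scores, dim=-1)
        mixed = attn @ x        # token-mixing step (the FFN-like view of attention)
        return self.moe(mixed)  # per-token transform by the shared experts


if __name__ == "__main__":
    d_model, d_hidden, n_experts = 64, 256, 4
    experts = nn.ModuleList(Expert(d_model, d_hidden) for _ in range(n_experts))
    moe = SharedExpertMoE(experts, d_model)   # one expert pool ...
    attn_layer = AttentionMoE(moe, d_model)   # ... used inside attention
    ffn_layer = moe                           # ... and reused as the FFN MoE layer
    x = torch.randn(2, 8, d_model)
    print(attn_layer(x).shape, ffn_layer(x).shape)
```

The point of the sketch is only the parameter sharing: the same `Expert` modules transform both attention-mixed tokens and raw tokens, which is the unification the abstract claims; consult the paper for the actual reformulation and routing scheme.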
Cite
Text
Yang et al. "UMoE: Unifying Attention and FFN with Shared Experts." Advances in Neural Information Processing Systems, 2025.

Markdown

[Yang et al. "UMoE: Unifying Attention and FFN with Shared Experts." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/yang2025neurips-umoe/)

BibTeX
@inproceedings{yang2025neurips-umoe,
title = {{UMoE: Unifying Attention and FFN with Shared Experts}},
author = {Yang, Yuanhang and Wang, Chaozheng and Li, Jing},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/yang2025neurips-umoe/}
}