Masked Gated Linear Unit

Abstract

Gated Linear Units (GLUs) have become essential components in the feed-forward networks of state-of-the-art Large Language Models (LLMs). However, they require twice as many memory reads as feed-forward layers without gating, because they use separate weight matrices for the gate and value streams. To address this bottleneck, we introduce Masked Gated Linear Units (MGLUs), a novel family of GLUs with an efficient kernel implementation. The core contributions of MGLUs include: (1) the Mixture of Element-wise Gating (MoEG) architecture, which learns multiple binary masks, each assigning the elements of a single shared weight matrix to either the gate or the value stream, thereby reducing memory transfer; and (2) FlashMGLU, a hardware-friendly kernel that delivers up to a 19.7× inference-time speed-up over a naïve PyTorch MGLU and, despite the added architectural complexity, is 47% more memory-efficient and 34% faster than standard GLUs on an RTX 5090 GPU. In LLM experiments, the Swish-activated variant SwiMGLU preserves these memory advantages while matching, or even surpassing, the downstream accuracy of the SwiGLU baseline.
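
For intuition, below is a minimal PyTorch sketch of how the abstract's description could be realized: a single shared up-projection weight is read once, and each binary mask partitions its elements between the gate and value streams before the gated products are mixed (MoEG). The class name `MGLU`, the fixed random masks, and the averaging over masks are illustrative assumptions rather than the paper's implementation; in particular, the reported speed-ups come from the fused FlashMGLU kernel, which this naive per-mask loop does not reproduce.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MGLU(nn.Module):
    """Sketch of a Masked Gated Linear Unit feed-forward block.

    One shared up-projection weight is stored; each binary mask assigns
    every element of that weight to either the gate or the value stream,
    and the gated outputs of all masks are mixed (MoEG).
    """

    def __init__(self, d_model: int, d_ff: int, num_masks: int = 2):
        super().__init__()
        self.w_shared = nn.Linear(d_model, d_ff, bias=False)  # single shared matrix
        self.w_down = nn.Linear(d_ff, d_model, bias=False)
        # Fixed random binary masks for illustration only; the paper learns them.
        self.register_buffer(
            "masks", (torch.rand(num_masks, d_ff, d_model) > 0.5).float()
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.w_shared.weight  # shape: (d_ff, d_model)
        out = torch.zeros(*x.shape[:-1], w.shape[0], device=x.device, dtype=x.dtype)
        for m in self.masks:
            gate = F.silu(x @ (m * w).T)      # Swish-gated stream ("SwiMGLU")
            value = x @ ((1.0 - m) * w).T     # complementary value stream
            out = out + gate * value
        return self.w_down(out / self.masks.shape[0])  # average over masks (assumed)


# Usage: drop-in replacement for a GLU-style feed-forward block.
x = torch.randn(2, 16, 512)
ffn = MGLU(d_model=512, d_ff=2048, num_masks=2)
y = ffn(x)  # (2, 16, 512)
```

Because the gate and value projections share one weight matrix, the weights are transferred from memory only once per token; a fused kernel such as FlashMGLU can exploit this, whereas the loop above still performs two masked matrix multiplications per mask.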

Cite

Text

Tajima et al. "Masked Gated Linear Unit." Advances in Neural Information Processing Systems, 2025.

Markdown

[Tajima et al. "Masked Gated Linear Unit." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/tajima2025neurips-masked/)

BibTeX

@inproceedings{tajima2025neurips-masked,
  title     = {{Masked Gated Linear Unit}},
  author    = {Tajima, Yukito and Inoue, Nakamasa and Sekikawa, Yusuke and Sato, Ikuro and Yokota, Rio},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/tajima2025neurips-masked/}
}