FFN Fusion: Rethinking Sequential Computation in Large Language Models

Abstract

We introduce *FFN Fusion*, an architectural optimization technique that reduces sequential computation in large language models by identifying and exploiting natural opportunities for parallelization. Our key insight is that sequences of Feed-Forward Network (FFN) layers, particularly those remaining after the removal of specific attention layers, can often be parallelized with minimal accuracy impact. We develop a principled methodology for identifying and fusing such sequences, transforming them into parallel operations that significantly reduce inference latency while preserving model behavior. Applying these techniques to Llama-3.1-405B-Instruct, we create a 253B model (253B-Base), an efficient and soon-to-be publicly available model that achieves a 1.71× speedup in inference latency and 35× lower per-token cost while maintaining strong performance across benchmarks. Most intriguingly, we find that even full transformer blocks containing both attention and FFN layers can sometimes be parallelized, suggesting new directions for neural architecture design.

Cite

Text

Bercovich et al. "FFN Fusion: Rethinking Sequential Computation in Large Language Models." Advances in Neural Information Processing Systems, 2025.

Markdown

[Bercovich et al. "FFN Fusion: Rethinking Sequential Computation in Large Language Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/bercovich2025neurips-ffn/)

BibTeX

@inproceedings{bercovich2025neurips-ffn,
  title     = {{FFN Fusion: Rethinking Sequential Computation in Large Language Models}},
  author    = {Bercovich, Akhiad and Dabbah, Mohammed and Puny, Omri and Galil, Ido and Geifman, Amnon and Geifman, Yonatan and Golan, Izhak and Karpas, Ehud Dov and Levy, Itay and Moshe, Zach and Nabwani, Najeeb and Ronen, Tomer and Schen, Itamar and Shahaf, Ido and Tropp, Oren and Zilberstein, Ran and El-Yaniv, Ran},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/bercovich2025neurips-ffn/}
}