Peri-LN: Revisiting Normalization Layer in the Transformer Architecture
Abstract
Selecting a layer normalization (LN) strategy that stabilizes training and speeds convergence in Transformers remains difficult, even for today’s large language models (LLMs). We present a comprehensive analytical foundation for understanding how different LN strategies influence training dynamics in large-scale Transformers. Pre-LN and Post-LN have long dominated standard practice despite their limitations in large-scale training, yet several open-source models have recently begun silently adopting a third strategy without much explanation. This strategy places normalization layers peripherally around sublayers, a design we term Peri-LN. While Peri-LN has demonstrated promising performance, its precise mechanisms and benefits remain almost unexplored. Our in-depth analysis delineates the distinct behaviors of LN strategies, showing how each placement shapes activation variance and gradient propagation. To validate our theoretical insight, we conduct extensive experiments on Transformers up to $3.2$B parameters, showing that Peri-LN consistently achieves more balanced variance growth, steadier gradient flow, and greater convergence stability. Our results suggest that Peri-LN warrants broader consideration for large-scale Transformer architectures, providing renewed insights into the optimal placement of LN.
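To make the placement difference concrete, the following is a minimal sketch (not the authors' released code) contrasting a Pre-LN sublayer, which normalizes only the sublayer input, with a Peri-LN sublayer, which additionally normalizes the sublayer output before it is added back to the residual stream. The module names, the use of `nn.LayerNorm` (rather than, e.g., RMSNorm), and the attention-only block are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PreLNBlock(nn.Module):
    """Pre-LN: normalize the sublayer input; the residual path is untouched."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out  # raw sublayer output is added to the residual stream


class PeriLNBlock(nn.Module):
    """Peri-LN (as described in the abstract): normalize both the sublayer
    input and its output, so the residual update itself is normalized."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.norm_in = nn.LayerNorm(d_model)
        self.norm_out = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        h = self.norm_in(x)
        out, _ = self.attn(h, h, h)
        return x + self.norm_out(out)  # output LN bounds the residual update


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)  # (batch, sequence, d_model)
    print(PreLNBlock(64)(x).shape, PeriLNBlock(64)(x).shape)
```

Under this reading, the peripheral output normalization keeps each residual update on a comparable scale across depth, which is one plausible mechanism behind the more balanced activation-variance growth the paper reports.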
Cite
Text
Kim et al. "Peri-LN: Revisiting Normalization Layer in the Transformer Architecture." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown
[Kim et al. "Peri-LN: Revisiting Normalization Layer in the Transformer Architecture." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/kim2025icml-periln/)

BibTeX
@inproceedings{kim2025icml-periln,
title = {{Peri-LN: Revisiting Normalization Layer in the Transformer Architecture}},
author = {Kim, Jeonghoon and Lee, Byeongchan and Park, Cheonbok and Oh, Yeontaek and Kim, Beomjun and Yoo, Taehwan and Shin, Seongjin and Han, Dongyoon and Shin, Jinwoo and Yoo, Kang Min},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {30400--30436},
volume = {267},
url = {https://mlanthology.org/icml/2025/kim2025icml-periln/}
}