Reasoning Is Periodicity? Improving Large Language Models Through Effective Periodicity Modeling
Abstract
Periodicity is a fundamental characteristic that underpins structured knowledge acquisition and systematic cognitive processes in human learning. However, deficiencies in the Transformer's periodicity modeling impair the learning efficiency of large language models (LLMs) built upon it and their ability to establish underlying principles from data. In this paper, we demonstrate that integrating effective periodicity modeling can improve the learning efficiency and performance of LLMs. We introduce FANformer, which adapts the Fourier Analysis Network (FAN) to the attention mechanism by modifying its feature projection process, thereby achieving efficient periodicity modeling. Extensive experimental results on language modeling show that FANformer consistently outperforms the Transformer when scaling up model size and training tokens, underscoring its superior learning efficiency. Our pretrained FANformer-1B exhibits marked improvements on downstream tasks compared to open-source LLMs with a similar number of parameters or training tokens. Moreover, we reveal that FANformer exhibits a superior ability to learn and apply rules for reasoning compared to the Transformer. These results position FANformer as an effective and promising architecture for advancing LLMs.
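To make the architectural change concrete, below is a minimal sketch of what a FAN-style feature projection inside attention might look like. It is an illustration only: the class name FANProjection, the periodic_ratio split, the cos/sin feature construction, the GELU activation on the non-periodic branch, and the way the module would replace the standard query/key projection are all assumptions for exposition, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FANProjection(nn.Module):
    """FAN-style projection (sketch): part of the output dimension is spanned by
    periodic cos/sin features, the rest by an ordinary activated linear map."""

    def __init__(self, d_model: int, periodic_ratio: float = 0.25):
        super().__init__()
        # Split the output dimension between periodic and non-periodic parts
        # (the exact ratio is an assumption, not taken from the paper).
        d_p = int(d_model * periodic_ratio) // 2 * 2  # even: shared by cos and sin
        d_g = d_model - d_p
        self.w_p = nn.Linear(d_model, d_p // 2, bias=False)  # periodic branch
        self.w_g = nn.Linear(d_model, d_g)                    # non-periodic branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.w_p(x)
        # Explicit Fourier features let the projection represent periodic
        # structure directly, rather than approximating it piecewise.
        return torch.cat([torch.cos(p), torch.sin(p), F.gelu(self.w_g(x))], dim=-1)

# Illustrative use: swap this projection in for the standard linear map that
# feeds the attention computation.
x = torch.randn(2, 16, 512)        # (batch, sequence, d_model)
proj = FANProjection(d_model=512)
h = proj(x)                        # (2, 16, 512), then used by attention as usual

The design intent, as described in the abstract, is that periodicity is modeled explicitly in the projection feeding attention rather than being left for the network to approximate implicitly.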
Cite
Text
Dong et al. "Reasoning Is Periodicity? Improving Large Language Models Through Effective Periodicity Modeling." Advances in Neural Information Processing Systems, 2025.
Markdown
[Dong et al. "Reasoning Is Periodicity? Improving Large Language Models Through Effective Periodicity Modeling." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/dong2025neurips-reasoning/)
BibTeX
@inproceedings{dong2025neurips-reasoning,
title = {{Reasoning Is Periodicity? Improving Large Language Models Through Effective Periodicity Modeling}},
author = {Dong, Yihong and Li, Ge and Jiang, Xue and Tao, Yongding and Zhang, Kechi and Wang, Lecheng and Zhu, Hao and Liu, Huanyu and Ding, Jiazheng and Li, Jia and Deng, Jinliang and Mei, Hong},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/dong2025neurips-reasoning/}
}