PaceLLM: Brain-Inspired Large Language Models for Long-Context Understanding
Abstract
While Large Language Models (LLMs) demonstrate strong performance across domains, their long-context capabilities are limited by transient neural activations causing information decay and unstructured feed-forward network (FFN) weights leading to semantic fragmentation. Inspired by the brain’s working memory and cortical modularity, we propose PaceLLM, featuring two innovations: (1) a Persistent Activity (PA) Mechanism that mimics prefrontal cortex (PFC) neurons’ persistent firing by introducing an activation-level memory bank to dynamically retrieve, reuse, and update critical FFN states, addressing contextual decay; and (2) Cortical Expert (CE) Clustering that emulates task-adaptive neural specialization to reorganize FFN weights into semantic modules, establishing cross-token dependencies and mitigating fragmentation. Extensive evaluations show that PaceLLM achieves a 6% improvement on LongBench’s Multi-document QA and 12.5–17.5% performance gains on $\infty$-Bench tasks, while extending measurable context length to 200K tokens in Needle-In-A-Haystack (NIAH) tests. This work pioneers brain-inspired LLM optimization and complements existing approaches. Moreover, it can be generalized to other models, enhancing their long-context performance and interpretability without structural overhauls.
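To make the retrieve–reuse–update cycle of the PA Mechanism concrete, here is a minimal toy sketch of an activation-level memory bank. All names (`ActivationMemoryBank`, the FIFO slot rotation, cosine-similarity retrieval, and the blending weight `alpha`) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

class ActivationMemoryBank:
    """Toy sketch of an activation-level memory bank for FFN states.
    Hypothetical design: the paper's actual PA mechanism may differ."""

    def __init__(self, dim, capacity=4, alpha=0.5):
        self.bank = np.zeros((capacity, dim))  # stored FFN activations
        self.alpha = alpha                     # reuse blending weight
        self.ptr = 0                           # next slot to overwrite

    def retrieve(self, h):
        # Find the stored activation most similar (cosine) to the current state.
        norms = np.linalg.norm(self.bank, axis=1) * np.linalg.norm(h) + 1e-8
        sims = self.bank @ h / norms
        return self.bank[int(np.argmax(sims))]

    def reuse(self, h):
        # Blend the current FFN state with the retrieved persistent state,
        # counteracting decay of earlier context.
        return (1 - self.alpha) * h + self.alpha * self.retrieve(h)

    def update(self, h):
        # Write the new state into the bank, rotating slots FIFO-style.
        self.bank[self.ptr] = h
        self.ptr = (self.ptr + 1) % len(self.bank)

# Example: a stored state pulls a similar new state toward it.
bank = ActivationMemoryBank(dim=4)
bank.update(np.array([1.0, 0.0, 0.0, 0.0]))
out = bank.reuse(np.array([0.9, 0.1, 0.0, 0.0]))
```

The design choice here (blending rather than replacing) keeps the current token's information while reinforcing it with the persistent memory, which is the intuition behind mitigating contextual decay.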
Cite
Text
Li et al. "PaceLLM: Brain-Inspired Large Language Models for Long-Context Understanding." Advances in Neural Information Processing Systems, 2025.
Markdown
[Li et al. "PaceLLM: Brain-Inspired Large Language Models for Long-Context Understanding." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/li2025neurips-pacellm/)
BibTeX
@inproceedings{li2025neurips-pacellm,
title = {{PaceLLM: Brain-Inspired Large Language Models for Long-Context Understanding}},
author = {Li, Kangcong and Ye, Peng and Tu, Chongjun and Zhang, Lin and Song, Chunfeng and Wu, Jiamin and Yang, Tao and Zheng, Qihao and Chen, Tao},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/li2025neurips-pacellm/}
}