SAS: Simulated Attention Score

Abstract

The attention mechanism is a core component of the Transformer architecture. Various methods have been developed to compute attention scores, including multi-head attention (MHA), multi-query attention (MQA), grouped-query attention (GQA), and others. We further analyze MHA and observe that its performance improves as the number of attention heads increases, provided the hidden size per head remains sufficiently large. Increasing both the head count and the hidden size per head with minimal parameter overhead can therefore lead to significant performance gains at low cost. Motivated by this insight, we introduce Simulated Attention Score (SAS), which maintains a compact model size while simulating a larger number of attention heads and a larger hidden feature dimension per head. This is achieved by projecting a low-dimensional head representation into a higher-dimensional space, effectively increasing attention capacity without increasing parameter count. Beyond the head representations, we further extend the simulation approach to the feature dimensions of the key and query embeddings, enhancing expressiveness by mimicking the behavior of a larger model while preserving the original model size. To control the parameter cost, we also propose Parameter-Efficient Attention Aggregation (PEAA). Comprehensive experiments on a variety of datasets and tasks demonstrate the effectiveness of the proposed SAS method, achieving significant improvements over different attention variants.
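To make the abstract's idea concrete, below is a minimal, illustrative sketch of attention with simulated heads and a simulated per-head dimension. It is not the authors' reference implementation: the module name `SimulatedAttentionSketch`, the projections `head_proj` and `dim_proj`, the aggregation `head_agg` (a stand-in for PEAA), and all hyperparameters are assumptions for illustration only; the actual SAS/PEAA parameterization may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimulatedAttentionSketch(nn.Module):
    """Sketch: compute attention scores with more (simulated) heads and a
    larger (simulated) q/k dimension than the compact base model provides."""

    def __init__(self, d_model=256, n_heads=4, n_sim_heads=8, sim_head_dim=128):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.n_sim_heads = n_sim_heads
        self.sim_head_dim = sim_head_dim

        # Standard compact projections, as in vanilla MHA.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

        # Simulation: map the few real heads to more simulated heads, and lift
        # each head's q/k features to a higher dimension (assumed linear maps).
        self.head_proj = nn.Linear(n_heads, n_sim_heads, bias=False)
        self.dim_proj = nn.Linear(self.head_dim, sim_head_dim, bias=False)

        # Aggregate simulated heads back to the real head count
        # (a simplified stand-in for the paper's PEAA).
        self.head_agg = nn.Linear(n_sim_heads, n_heads, bias=False)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        B, T, _ = x.shape
        # (B, T, d_model) -> (B, H, T, Dh)
        q = self.q_proj(x).view(B, T, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(B, T, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(B, T, self.n_heads, self.head_dim).transpose(1, 2)

        # Simulate more heads by mixing along the head axis: (B, H, T, Dh) -> (B, H', T, Dh).
        q = self.head_proj(q.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        k = self.head_proj(k.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)

        # Simulate a larger per-head feature dimension for q/k only, so
        # attention scores are computed in the lifted space: (B, H', T, D').
        q = self.dim_proj(q)
        k = self.dim_proj(k)

        scores = q @ k.transpose(-2, -1) / (self.sim_head_dim ** 0.5)
        attn = F.softmax(scores, dim=-1)  # (B, H', T, T)

        # Collapse the simulated-head attention weights back to the real heads
        # before applying them to the unchanged value projections.
        attn = self.head_agg(attn.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        out = attn @ v                                     # (B, H, T, Dh)
        out = out.transpose(1, 2).reshape(B, T, -1)        # (B, T, d_model)
        return self.out_proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 16, 256)
    print(SimulatedAttentionSketch()(x).shape)  # torch.Size([2, 16, 256])
```

The extra parameters here live only in the small head- and dimension-projection matrices, which is much cheaper than instantiating a model with 8 true heads of dimension 128, matching the abstract's claim of increased attention capacity at near-constant parameter count.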

Cite

Text

Zheng et al. "SAS: Simulated Attention Score." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zheng et al. "SAS: Simulated Attention Score." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zheng2025neurips-sas/)

BibTeX

@inproceedings{zheng2025neurips-sas,
  title     = {{SAS: Simulated Attention Score}},
  author    = {Zheng, Chuanyang and Sun, Jiankai and Gao, Yihang and Wang, Yuehao and Wang, Peihao and Xiong, Jing and Ren, Liliang and Cheng, Hao and Kulkarni, Janardhan and Shen, Yelong and Wang, Zhangyang and Schwager, Mac and Schneider, Anderson and Liu, Xiaodong and Gao, Jianfeng},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zheng2025neurips-sas/}
}