Theory, Analysis, and Best Practices for Sigmoid Self-Attention
Abstract
Attention is a key part of the transformer architecture. It is a sequence-to-sequence mapping that transforms each sequence element into a weighted sum of values. The weights are typically obtained as the softmax of dot products between keys and queries. Recent work has explored alternatives to softmax attention in transformers, such as ReLU and sigmoid activations. In this work, we revisit sigmoid attention and conduct an in-depth theoretical and empirical analysis. Theoretically, we prove that transformers with sigmoid attention are universal function approximators and benefit from improved regularity compared to softmax attention. Through detailed empirical analysis, we identify stabilization of large initial attention norms during the early stages of training as a crucial factor for the successful training of models with sigmoid attention, outperforming prior attempts. We also introduce FLASHSIGMOID, a hardware-aware and memory-efficient implementation of sigmoid attention yielding a 17% inference kernel speed-up over FLASHATTENTION2 on H100 GPUs. Experiments across language, vision, and speech show that properly normalized sigmoid attention matches the strong performance of softmax attention on a wide range of domains and scales, which previous attempts at sigmoid attention were unable to fully achieve. Our work unifies prior art and establishes best practices for sigmoid attention as a drop-in softmax replacement in transformers.
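The abstract contrasts softmax attention, which normalizes the query-key logits row-wise, with sigmoid attention, which applies an elementwise sigmoid instead. Below is a minimal PyTorch sketch of the two; the bias argument and its -log(seq_len) default stand in for the "proper normalization" the abstract mentions and are assumptions for illustration, not necessarily the paper's exact scheme.

import torch

def softmax_attention(q, k, v):
    # Standard attention: row-wise softmax over scaled query-key logits.
    d = q.shape[-1]
    logits = q @ k.transpose(-2, -1) / d ** 0.5
    return torch.softmax(logits, dim=-1) @ v

def sigmoid_attention(q, k, v, bias=None):
    # Sigmoid attention: an elementwise sigmoid replaces the softmax, so rows
    # of the attention matrix no longer sum to one.
    # The -log(seq_len) default bias is an assumed normalization choice for
    # this sketch; the abstract does not specify the exact scheme.
    d, n = q.shape[-1], k.shape[-2]
    if bias is None:
        bias = -torch.log(torch.tensor(float(n)))
    logits = q @ k.transpose(-2, -1) / d ** 0.5
    return torch.sigmoid(logits + bias) @ v

# Usage: with q, k, v of shape (batch, heads, seq_len, head_dim), both
# functions return an output with the same shape as v.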
Cite
Text
Ramapuram et al. "Theory, Analysis, and Best Practices for Sigmoid Self-Attention." International Conference on Learning Representations, 2025.
Markdown
[Ramapuram et al. "Theory, Analysis, and Best Practices for Sigmoid Self-Attention." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/ramapuram2025iclr-theory/)
BibTeX
@inproceedings{ramapuram2025iclr-theory,
  title     = {{Theory, Analysis, and Best Practices for Sigmoid Self-Attention}},
  author    = {Ramapuram, Jason and Danieli, Federico and Dhekane, Eeshan Gunesh and Weers, Floris and Busbridge, Dan and Ablin, Pierre and Likhomanenko, Tatiana and Digani, Jagrit and Gu, Zijin and Shidani, Amitis and Webb, Russell},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/ramapuram2025iclr-theory/}
}