Fixed-Point RNNs: From Diagonal to Dense in a Few Iterations

Abstract

Linear recurrent neural networks (RNNs) and state-space models (SSMs) such as Mamba have become promising alternatives to softmax attention as sequence mixing layers in Transformer architectures. Current models, however, do not exhibit the full state-tracking expressivity of RNNs because they rely on channel-wise (i.e., diagonal) sequence mixing. In this paper, we propose to compute a dense linear RNN as the fixed-point of a parallelizable diagonal linear RNN in a single layer. We explore mechanisms to improve its memory and state-tracking abilities in practice, and achieve state-of-the-art results on the commonly used toy tasks $A_5$, $S_5$, copying, and modular arithmetic. We hope our results will open new avenues to more expressive and efficient sequence mixers.
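For intuition, the sketch below shows one natural way such a fixed-point construction can be realized numerically; it is not the paper's implementation, and the splitting $A = D + R$, the convergence assumption, and the names `diagonal_scan` and `dense_rnn_fixed_point` are illustrative choices. The dense transition matrix is split into its diagonal part $D$ and off-diagonal remainder $R$, and a diagonal linear recurrence is iterated with the previous iterate's off-diagonal mixing fed in as extra input; at the fixed point, the states satisfy the dense recurrence $h_t = A h_{t-1} + B x_t$.

```python
import numpy as np

def diagonal_scan(d, u):
    """Run the diagonal linear recurrence h_t = d * h_{t-1} + u_t.

    d: (n,) diagonal of the recurrence matrix.
    u: (T, n) per-step inputs.
    A sequential loop is used for clarity; a real layer would use a
    parallel prefix scan, which is what makes each iteration cheap.
    """
    T, n = u.shape
    h = np.zeros((T, n))
    prev = np.zeros(n)
    for t in range(T):
        prev = d * prev + u[t]
        h[t] = prev
    return h

def dense_rnn_fixed_point(A, B, x, num_iters=10):
    """Approximate the dense linear RNN h_t = A h_{t-1} + B x_t by
    iterating a diagonal recurrence to its fixed point.

    Split A = D + R into its diagonal D and off-diagonal remainder R.
    Each iteration solves the diagonal recurrence
        h^{(k+1)}_t = D h^{(k+1)}_{t-1} + (R h^{(k)}_{t-1} + B x_t),
    treating the off-diagonal mixing of the previous iterate as input.
    If the iteration converges, the fixed point satisfies the dense
    recurrence exactly.
    """
    T, n = x.shape[0], A.shape[0]
    d = np.diag(A)                       # diagonal part D
    R = A - np.diag(d)                   # off-diagonal remainder
    u_in = x @ B.T                       # (T, n) driving inputs B x_t
    h = np.zeros((T, n))                 # initial guess h^{(0)} = 0
    for _ in range(num_iters):
        h_prev = np.vstack([np.zeros((1, n)), h[:-1]])  # h^{(k)}_{t-1}
        h = diagonal_scan(d, h_prev @ R.T + u_in)
    return h

# Sanity check against the direct dense recurrence.
rng = np.random.default_rng(0)
n, m, T = 4, 3, 32
A = rng.standard_normal((n, n))
A *= 0.2 / np.linalg.norm(A, 2)          # small spectral norm => iteration contracts
B = rng.standard_normal((n, m))
x = rng.standard_normal((T, m))

h_fp = dense_rnn_fixed_point(A, B, x, num_iters=50)
h_ref, prev = np.zeros((T, n)), np.zeros(n)
for t in range(T):
    prev = A @ prev + B @ x[t]
    h_ref[t] = prev
print(np.max(np.abs(h_fp - h_ref)))      # near zero once the iteration has converged
```

In this toy setting each iteration costs only a diagonal scan, so a dense recurrence is recovered from a few parallelizable passes; the trade-offs of this splitting versus the mechanisms actually studied in the paper are not reflected here.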

Cite

Text

Movahedi et al. "Fixed-Point RNNs: From Diagonal to Dense in a Few Iterations." ICLR 2025 Workshops: SCOPE, 2025.

Markdown

[Movahedi et al. "Fixed-Point RNNs: From Diagonal to Dense in a Few Iterations." ICLR 2025 Workshops: SCOPE, 2025.](https://mlanthology.org/iclrw/2025/movahedi2025iclrw-fixedpoint/)

BibTeX

@inproceedings{movahedi2025iclrw-fixedpoint,
  title     = {{Fixed-Point RNNs: From Diagonal to Dense in a Few Iterations}},
  author    = {Movahedi, Sajad and Sarnthein, Felix and Cirone, Nicola Muca and Orvieto, Antonio},
  booktitle = {ICLR 2025 Workshops: SCOPE},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/movahedi2025iclrw-fixedpoint/}
}