Implicit Language Models Are RNNs: Balancing Parallelization and Expressivity
Abstract
State-space models (SSMs) and transformers dominate the language modeling landscape. However, they are constrained to a lower computational complexity than classical recurrent neural networks (RNNs), limiting their expressivity. In contrast, RNNs lack parallelization during training, raising fundamental questions about the trade-off between parallelization and expressivity. We propose implicit SSMs, which iterate a transformation until convergence to a fixed point. Theoretically, we show that implicit SSMs implement the non-linear state transitions of RNNs. Empirically, we find that approximate fixed-point convergence suffices, enabling the design of a scalable training curriculum that largely retains parallelization, with full convergence required only for a small subset of tokens. Our approach demonstrates superior state-tracking capabilities on regular languages, surpassing transformers and SSMs. We further scale implicit SSMs to natural language reasoning tasks and pretraining of large-scale language models up to 1.3B parameters on 207B tokens, representing, to our knowledge, the largest implicit model trained to date. Notably, our implicit models outperform their explicit counterparts on standard benchmarks.
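To make the core idea concrete, below is a minimal sketch of resolving a hidden state as a fixed point of a nonlinear update, as the abstract describes. The parameter names (A, B, W), the tanh update rule, and the stopping tolerance are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def implicit_ssm_step(x_t, h_prev, A, B, W, n_iters=50, tol=1e-6):
    """Hypothetical sketch: compute the hidden state h_t as the fixed point of a
    nonlinear map conditioned on the input x_t and the previous state h_prev.
    Iteration stops early once successive estimates agree to within tol,
    mirroring the paper's observation that approximate convergence suffices."""
    h = np.zeros_like(h_prev)
    for _ in range(n_iters):
        # Candidate update: a state-space-style linear map of (h_prev, x_t)
        # plus a nonlinear self-dependence on the current estimate of h_t.
        h_new = np.tanh(A @ h_prev + B @ x_t + W @ h)
        if np.linalg.norm(h_new - h) < tol:
            return h_new
        h = h_new
    return h
```

Because the self-dependence on h is resolved implicitly rather than unrolled over time, the surrounding computation over the sequence can remain largely parallel, with full iteration to convergence needed only where the fixed point is slow to settle.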
Cite
Text
Schöne et al. "Implicit Language Models Are RNNs: Balancing Parallelization and Expressivity." ICLR 2025 Workshops: LLM_Reason_and_Plan, 2025.
Markdown
[Schöne et al. "Implicit Language Models Are RNNs: Balancing Parallelization and Expressivity." ICLR 2025 Workshops: LLM_Reason_and_Plan, 2025.](https://mlanthology.org/iclrw/2025/schone2025iclrw-implicit/)
BibTeX
@inproceedings{schone2025iclrw-implicit,
  title     = {{Implicit Language Models Are RNNs: Balancing Parallelization and Expressivity}},
  author    = {Schöne, Mark and Rahmani, Babak and Kremer, Heiner and Falck, Fabian and Ballani, Hitesh and Gladrow, Jannes},
  booktitle = {ICLR 2025 Workshops: LLM_Reason_and_Plan},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/schone2025iclrw-implicit/}
}