Long Range Language Modeling via Gated State Spaces
Abstract
State space models have been shown to be effective at modeling long-range dependencies, especially on sequence classification tasks. In this work we focus on autoregressive sequence modeling over English books, GitHub source code and arXiv mathematics articles. Based on recent developments around the effectiveness of gated activation functions, we propose a new layer named *Gated State Space* (GSS) and show that it trains significantly faster than the diagonal version of S4 (i.e. DSS) on TPUs, is fairly competitive with several well-tuned Transformer-based baselines, and exhibits zero-shot generalization to longer inputs while being straightforward to implement. Finally, we show that leveraging self-attention to model local dependencies improves the performance of GSS even further.
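The abstract describes the GSS layer only at a high level: a gated activation wrapped around a diagonal state space (DSS) operation, computed as a long convolution over the sequence. Below is a minimal, illustrative sketch of what such a gated state-space-style block can look like in JAX. The kernel parameterization, projection dimensions, and parameter names (`W_u`, `W_v`, `d_ss`, etc.) are assumptions made for exposition, not the paper's exact formulation.

```python
import jax
import jax.numpy as jnp

def dss_kernel(log_dt, lam_re, lam_im, w, L):
    """Length-L convolution kernel from a diagonal state space.

    Simplified discretization: K[l] = Re( sum_n w[n] * exp(lambda[n] * dt * l) ).
    """
    lam = lam_re + 1j * lam_im                          # diagonal state matrix, shape (N,)
    dt = jnp.exp(log_dt)                                # positive step size
    pos = jnp.arange(L)
    powers = jnp.exp(lam[:, None] * dt * pos[None, :])  # (N, L)
    return jnp.real(jnp.einsum('n,nl->l', w, powers))   # (L,)

def causal_fft_conv(u, kernel):
    """Causal convolution of a 1-D sequence with a same-length kernel via FFT."""
    L = u.shape[0]
    u_f = jnp.fft.rfft(u, n=2 * L)
    k_f = jnp.fft.rfft(kernel, n=2 * L)
    return jnp.fft.irfft(u_f * k_f, n=2 * L)[:L]

def gss_layer(params, x):
    """Gated state-space-style block. x: (L, d_model) -> (L, d_model)."""
    u = jax.nn.gelu(x @ params['W_u'])                  # gate branch, (L, d_ff)
    v = jax.nn.gelu(x @ params['W_v'])                  # state-space branch, (L, d_ss)
    k = dss_kernel(params['log_dt'], params['lam_re'],
                   params['lam_im'], params['w'], x.shape[0])
    # Apply the same kernel independently to each of the d_ss channels.
    y = jax.vmap(causal_fft_conv, in_axes=(1, None), out_axes=1)(v, k)
    y = y @ params['W_up']                              # back up to (L, d_ff)
    return (u * y) @ params['W_o']                      # gate and project to d_model

def init_params(key, d_model=64, d_ff=128, d_ss=32, n_state=16):
    ks = jax.random.split(key, 5)
    dense = lambda k, shape: 0.02 * jax.random.normal(k, shape)
    return {
        'W_u': dense(ks[0], (d_model, d_ff)),
        'W_v': dense(ks[1], (d_model, d_ss)),
        'W_up': dense(ks[2], (d_ss, d_ff)),
        'W_o': dense(ks[3], (d_ff, d_model)),
        # One shared diagonal state space for brevity; negative real part keeps it stable.
        'lam_re': -0.5 * jnp.ones(n_state),
        'lam_im': jnp.pi * jnp.arange(n_state, dtype=jnp.float32),
        'w': dense(ks[4], (n_state,)),
        'log_dt': jnp.log(jnp.array(0.01)),
    }

if __name__ == '__main__':
    key = jax.random.PRNGKey(0)
    params = init_params(key)
    x = jax.random.normal(key, (256, 64))               # (sequence length, d_model)
    print(gss_layer(params, x).shape)                   # (256, 64)
```

The actual GSS layer and the DSS kernel it builds on include further details (kernel normalization, per-channel state spaces, careful initialization) that are omitted here; this sketch only conveys the overall structure of a gated branch multiplied against an FFT-based state space convolution.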
Cite

Text
Mehta et al. "Long Range Language Modeling via Gated State Spaces." International Conference on Learning Representations, 2023.

Markdown
[Mehta et al. "Long Range Language Modeling via Gated State Spaces." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/mehta2023iclr-long/)

BibTeX
@inproceedings{mehta2023iclr-long,
title = {{Long Range Language Modeling via Gated State Spaces}},
author = {Mehta, Harsh and Gupta, Ankit and Cutkosky, Ashok and Neyshabur, Behnam},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/mehta2023iclr-long/}
}