Accelerating Linear Recurrent Neural Networks for the Edge with Unstructured Sparsity
Abstract
Linear recurrent neural networks enable powerful long-range sequence modeling with constant memory usage and constant time per token during inference. These architectures hold promise for streaming applications at the edge, but deployment in resource-constrained environments requires hardware-aware optimizations to minimize latency and energy consumption. Unstructured sparsity offers a compelling solution, enabling substantial reductions in compute and memory requirements when accelerated by compatible hardware platforms. In this paper, we conduct a scaling study to investigate the Pareto front of performance and efficiency across inference compute budgets. We find that highly sparse linear RNNs consistently achieve better efficiency-performance trade-offs than dense baselines, with $2\times$ less compute and $36$% less memory at iso-accuracy. Our models achieve state-of-the-art results on a real-time streaming task for audio denoising. By quantizing our sparse models to fixed-point arithmetic and deploying them on the Intel Loihi 2 neuromorphic chip for real-time processing, we translate model compression into tangible gains of $42\times$ lower latency and $149\times$ lower energy consumption compared to a dense model on an edge GPU. Our findings showcase the transformative potential of unstructured sparsity, paving the way for highly efficient recurrent neural networks in real-world, resource-constrained environments.
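To make the abstract's claims concrete, below is a minimal, hypothetical sketch (not the paper's released code) of one streaming inference step of a linear RNN with unstructured-sparse weights. The diagonal recurrence, the CSR storage via SciPy, and the 90% sparsity level are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix, random as sparse_random

d_model, d_state = 64, 64
rng = np.random.default_rng(seed=0)

# Unstructured sparsity: ~90% of projection weights are zero, stored in
# CSR format so each per-token mat-vec only touches surviving weights.
B = csr_matrix(sparse_random(d_state, d_model, density=0.1, random_state=0))
C = csr_matrix(sparse_random(d_model, d_state, density=0.1, random_state=1))
a = rng.uniform(0.90, 0.99, size=d_state)  # stable diagonal recurrence (|a| < 1)

def step(h, u):
    """Process one token: h_t = a * h_{t-1} + B u_t, y_t = C h_t.
    Compute and memory are constant per token, independent of sequence length."""
    h = a * h + B @ u
    return h, C @ h

h = np.zeros(d_state)
for _ in range(16):  # streaming loop: no input history is kept
    h, y = step(h, rng.standard_normal(d_model))
```

In this form the per-token cost reduces to two sparse mat-vecs plus an elementwise state update, which is the structure that sparsity-aware hardware such as Loihi 2 can exploit by skipping pruned weights entirely; the paper's deployed models additionally quantize these weights and activations to fixed-point arithmetic.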
Cite
Text
Pierro et al. "Accelerating Linear Recurrent Neural Networks for the Edge with Unstructured Sparsity." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Pierro et al. "Accelerating Linear Recurrent Neural Networks for the Edge with Unstructured Sparsity." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/pierro2025icml-accelerating/)
BibTeX
@inproceedings{pierro2025icml-accelerating,
title = {{Accelerating Linear Recurrent Neural Networks for the Edge with Unstructured Sparsity}},
author = {Pierro, Alessandro and Abreu, Steven and Timcheck, Jonathan and Stratmann, Philipp and Wild, Andreas and Shrestha, Sumit Bam},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {49382--49398},
volume = {267},
url = {https://mlanthology.org/icml/2025/pierro2025icml-accelerating/}
}