Sparse Neural Architectures via Deterministic Ramanujan Graphs

Abstract

We present a method for constructing sparse neural networks using the theory of expander graphs. Expanders are sparse yet well-connected graphs, widely used in the design of resilient networks. A Ramanujan graph is an expander that is extremal in terms of its spectral gap. In this work, bipartite Ramanujan expanders are deterministically constructed and used as the connection structures of the convolutional and fully connected layers of a neural network. The Ramanujan graphs arise either as Cayley graphs of certain algebraic groups or as Ramanujan $r$-coverings of the full $(k, l)$-biregular bipartite graph on $k + l$ vertices. The proposed sparse networks are found to achieve performance comparable to a fully dense network on benchmark datasets while attaining extremely low network density.
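To make the spectral condition concrete: a $(k, l)$-biregular bipartite graph is Ramanujan when every nontrivial eigenvalue $\lambda$ of its adjacency matrix satisfies $|\lambda| \le \sqrt{k-1} + \sqrt{l-1}$, equivalently, when the second-largest singular value of its biadjacency matrix clears that bound (the trivial singular value is always $\sqrt{kl}$). The sketch below is not from the paper: it illustrates the idea with a *random* $r$-lift of the complete bipartite graph $K_{k,l}$ used as a sparsity mask for a fully connected layer, with a resampling loop standing in for the paper's deterministic constructions purely for illustration.

```python
import numpy as np

def random_lift(k: int, l: int, r: int, seed: int = 0) -> np.ndarray:
    """Biadjacency matrix of a random r-covering (r-lift) of the complete
    bipartite graph K_{k,l}: each edge (i, j) is replaced by an r x r
    permutation matrix, yielding a (k, l)-biregular bipartite graph on
    r * (k + l) vertices."""
    rng = np.random.default_rng(seed)
    B = np.zeros((k * r, l * r))
    for i in range(k):
        for j in range(l):
            perm = rng.permutation(r)  # one permutation per edge of K_{k,l}
            B[i * r + np.arange(r), j * r + perm] = 1.0
    return B

def is_ramanujan(B: np.ndarray, k: int, l: int, tol: float = 1e-9) -> bool:
    """Spectral test: a connected (k, l)-biregular bipartite graph is
    Ramanujan iff every nontrivial singular value of its biadjacency
    matrix is at most sqrt(k-1) + sqrt(l-1); the trivial (largest)
    singular value equals sqrt(k * l)."""
    s = np.sort(np.linalg.svd(B, compute_uv=False))[::-1]
    return bool(s[1] <= np.sqrt(k - 1) + np.sqrt(l - 1) + tol)

# Random lifts are not guaranteed to be Ramanujan (unlike the paper's
# deterministic constructions), so resample until the test passes.
seed, k, l, r = 0, 4, 3, 8
while not is_ramanujan(B := random_lift(k, l, r, seed), k, l):
    seed += 1

# Use the biadjacency matrix as a fixed sparsity mask for a fully
# connected layer: only k * l * r of the (k*r) * (l*r) possible weights
# are retained, i.e. the layer density is 1/r.
W = np.random.randn(*B.shape) * B
print(f"layer density: {B.mean():.4f}")  # ~1/r = 0.125
```

Note that the lift parameter $r$ directly controls sparsity: the masked layer keeps a $1/r$ fraction of the dense layer's weights, which is the lever behind the "extremely low network density" claim.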

Cite

Text

Kalra et al. "Sparse Neural Architectures via Deterministic Ramanujan Graphs." Transactions on Machine Learning Research, 2025.

Markdown

[Kalra et al. "Sparse Neural Architectures via Deterministic Ramanujan Graphs." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/kalra2025tmlr-sparse/)

BibTeX

@article{kalra2025tmlr-sparse,
  title     = {{Sparse Neural Architectures via Deterministic Ramanujan Graphs}},
  author    = {Kalra, Suryam Arnav and Biswas, Arindam and Mitra, Pabitra and Basu, Biswajit},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/kalra2025tmlr-sparse/}
}