Interpretable and Parameter Efficient Graph Neural Additive Models with Random Fourier Features
Abstract
Graph Neural Networks (GNNs) excel at jointly modeling node features and topology, yet their black-box nature limits their adoption in real-world applications where interpretability is desired. Inspired by the success of interpretable Neural Additive Models (NAMs) for tabular data, the Graph Neural Additive Network (GNAN) extends the additive modeling approach to graph data to overcome this limitation of GNNs. While interpretable, GNAN's representation learning overlooks the importance of local aggregation and, more importantly, suffers from high parameter complexity. To address these challenges, we introduce the Graph Neural Additive Model with Random Fourier Features (G-NAMRFF), a lightweight, self-interpretable graph additive architecture. G-NAMRFF represents each node embedding as a sum of feature-wise contributions, each modeled via a Gaussian process (GP) with a graph- and feature-aware kernel. Specifically, we construct a Radial Basis Function (RBF) kernel whose graph structure is induced by the graph Laplacian and a learnable Finite Impulse Response (FIR) filter. We approximate this kernel with Random Fourier Features (RFFs), which turns the GP prior into a Bayesian formulation whose weights are learned by a single-layer neural network whose width equals the number of RFF features. G-NAMRFF is lightweight, with $168\times$ fewer parameters than GNAN. Despite its compact size, G-NAMRFF matches or outperforms state-of-the-art GNNs and GNAN on node and graph classification tasks, delivering real-time interpretability without sacrificing accuracy.
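The abstract describes per-feature contributions built from a graph-aware RBF kernel that is approximated with Random Fourier Features and read out by a single linear layer. The NumPy sketch below illustrates that pipeline under stated assumptions; the toy graph, filter order `K`, bandwidth `sigma`, feature count `D_rff`, and the helper names `fir_filter` and `rff` are hypothetical choices for illustration, not the authors' implementation (which would learn the filter taps and per-feature heads from data).

```python
# Hypothetical sketch of the G-NAMRFF idea from the abstract: FIR graph filter
# -> Random Fourier Features (RBF kernel approximation) -> per-feature linear
# head, with node predictions formed as an additive sum over features.
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_feats = 6, 3        # toy graph size (assumption)
D_rff = 32                     # number of Random Fourier Features (assumption)
K = 2                          # FIR filter order (assumption)
sigma = 1.0                    # RBF bandwidth (assumption)

# Toy symmetric adjacency and normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
A = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T
deg = np.clip(A.sum(1), 1.0, None)
L = np.eye(n_nodes) - A / np.sqrt(np.outer(deg, deg))

X = rng.normal(size=(n_nodes, n_feats))   # node features

# Learnable FIR graph filter h(L) = sum_k theta_k L^k applied to one feature column.
theta = rng.normal(size=K + 1)            # filter taps (would be learned)
def fir_filter(x):
    out, Lx = np.zeros_like(x), x.copy()
    for k in range(K + 1):
        out += theta[k] * Lx
        Lx = L @ Lx
    return out

# Random Fourier Features approximating the RBF kernel exp(-||a-b||^2 / (2 sigma^2)):
# phi(x) = sqrt(2/D) * cos(w x + b), with w ~ N(0, 1/sigma^2), b ~ U[0, 2*pi].
w = rng.normal(scale=1.0 / sigma, size=(1, D_rff))
b = rng.uniform(0, 2 * np.pi, size=D_rff)
def rff(x):                               # x: (n_nodes, 1) filtered feature
    return np.sqrt(2.0 / D_rff) * np.cos(x @ w + b)

# One single-layer head per feature over the RFF features (weights would be learned);
# the node score is the additive sum of per-feature contributions.
heads = [rng.normal(size=D_rff) for _ in range(n_feats)]
contrib = np.stack(
    [rff(fir_filter(X[:, j:j + 1])) @ heads[j] for j in range(n_feats)], axis=1
)
scores = contrib.sum(axis=1)              # additive prediction per node
print(contrib.shape, scores.shape)        # (6, 3) (6,)
```

Because the prediction is an additive sum over per-feature terms, the `contrib` matrix can be read directly as each feature's effect on each node, which is where the interpretability claimed in the abstract comes from; the trainable parameters are only the filter taps and the per-feature heads of size `D_rff`, consistent with the parameter-efficiency argument.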
Cite
Text
Reddy et al. "Interpretable and Parameter Efficient Graph Neural Additive Models with Random Fourier Features." Advances in Neural Information Processing Systems, 2025.
Markdown
[Reddy et al. "Interpretable and Parameter Efficient Graph Neural Additive Models with Random Fourier Features." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/reddy2025neurips-interpretable/)
BibTeX
@inproceedings{reddy2025neurips-interpretable,
title = {{Interpretable and Parameter Efficient Graph Neural Additive Models with Random Fourier Features}},
author = {Reddy, Thummaluru Siddartha and Saketh, Vempalli Naga Sai and Chandran, Mahesh},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/reddy2025neurips-interpretable/}
}