SHAP Values via Sparse Fourier Representation
Abstract
SHAP (SHapley Additive exPlanations) values are a widely used method for local feature attribution in interpretable and explainable AI. We propose an efficient two-stage algorithm for computing SHAP values in both the black-box setting and for tree-based models. Motivated by the spectral bias of real-world predictors, we first approximate models using compact Fourier representations, exactly for trees and approximately for black-box models. In the second stage, we introduce a closed-form formula for *exactly* computing SHAP values from the Fourier representation, which "linearizes" the computation into a simple summation and is amenable to parallelization. As the Fourier approximation is computed only once, our method enables amortized SHAP value computation, achieving significant speedups over existing methods and a tunable trade-off between efficiency and precision.
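As a hedged illustration of the "linearization" the abstract alludes to (not the paper's exact derivation), write the model in the Boolean Fourier (Walsh–Hadamard) basis and use the linearity of Shapley values in the underlying game:

$$f(x) \;=\; \sum_{S \subseteq [n]} \hat{f}(S)\,\chi_S(x), \qquad \phi_i(f, x) \;=\; \sum_{S:\, \hat{f}(S) \neq 0} \hat{f}(S)\,\phi_i(\chi_S, x),$$

where $\chi_S$ denotes the parity (Fourier basis) function on the subset $S$ and $\phi_i$ the SHAP attribution of feature $i$. According to the abstract, each per-frequency term admits a closed form, so a $k$-sparse Fourier approximation reduces SHAP computation to summing $k$ closed-form terms per feature; the notation here is assumed for illustration.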
Cite
Text
Gorji et al. "SHAP Values via Sparse Fourier Representation." Advances in Neural Information Processing Systems, 2025.

Markdown

[Gorji et al. "SHAP Values via Sparse Fourier Representation." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/gorji2025neurips-shap/)

BibTeX
@inproceedings{gorji2025neurips-shap,
title = {{SHAP Values via Sparse Fourier Representation}},
author = {Gorji, Ali and Amrollahi, Andisheh and Krause, Andreas},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/gorji2025neurips-shap/}
}