Statistical Significance of Feature Importance Rankings

Abstract

Feature importance scores are ubiquitous tools for understanding the predictions of machine learning models. However, many popular attribution methods suffer from high instability due to random sampling. Leveraging novel ideas from hypothesis testing, we devise techniques that ensure the most important features are correct with high-probability guarantees. These are capable of assessing both the set of $K$ top-ranked features and the order of its elements. Given local or global importance scores, we demonstrate how to retrospectively verify the stability of the highest ranks. We then introduce two efficient sampling algorithms that identify the $K$ most important features, perhaps in order, with probability at least $1-\alpha$. The theoretical justification for these procedures is validated empirically on SHAP and LIME.
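The retrospective verification idea in the abstract can be illustrated with a minimal sketch: given repeated importance estimates from a sampling-based attribution method, test whether the feature ranked $K$-th significantly exceeds the one ranked $(K+1)$-th. The function name `top_k_stable`, the paired one-sided z-test, and all constants below are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch of retrospective top-K stability checking.
# The paired one-sided z-test here is an assumed stand-in for the
# paper's hypothesis-testing machinery.
import math
import numpy as np

def top_k_stable(scores, K, alpha=0.05):
    """scores: (n_resamples, n_features) array of importance estimates
    from repeated runs of a sampling-based method (e.g., SHAP or LIME).
    Tests whether the K-th ranked feature's mean importance exceeds
    the (K+1)-th via a paired one-sided z-test across resamples."""
    means = scores.mean(axis=0)
    order = np.argsort(-means)            # features sorted by descending mean
    f_k, f_next = order[K - 1], order[K]  # boundary pair of the top-K set
    diffs = scores[:, f_k] - scores[:, f_next]
    n = len(diffs)
    z = diffs.mean() / (diffs.std(ddof=1) / math.sqrt(n))
    p = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # one-sided p-value
    return p < alpha, p

rng = np.random.default_rng(0)
# Synthetic importances: feature 0 clearly dominates features 1-3;
# the noise mimics instability from random sampling.
scores = rng.normal(loc=[5.0, 1.0, 0.9, 0.8], scale=0.2, size=(50, 4))
stable, p = top_k_stable(scores, K=1)
```

When the gap at the rank-$K$ boundary is large relative to the sampling noise, as for `K=1` above, the test rejects and the top-$K$ set is deemed stable; for closely ranked features (e.g., features 1 and 2 here), more resamples would be needed, which is the motivation for the paper's adaptive sampling algorithms.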

Cite

Text

Goldwasser and Hooker. "Statistical Significance of Feature Importance Rankings." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.

Markdown

[Goldwasser and Hooker. "Statistical Significance of Feature Importance Rankings." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.](https://mlanthology.org/uai/2025/goldwasser2025uai-statistical/)

BibTeX

@inproceedings{goldwasser2025uai-statistical,
  title     = {{Statistical Significance of Feature Importance Rankings}},
  author    = {Goldwasser, Jeremy and Hooker, Giles},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  year      = {2025},
  pages     = {1476--1496},
  volume    = {286},
  url       = {https://mlanthology.org/uai/2025/goldwasser2025uai-statistical/}
}