Probabilistic Stability Guarantees for Feature Attributions
Abstract
Stability guarantees have emerged as a principled way to evaluate feature attributions, but existing certification methods rely on heavily smoothed classifiers and often produce conservative guarantees. To address these limitations, we introduce soft stability and propose a simple, model-agnostic, sample-efficient stability certification algorithm (SCA) that yields non-trivial and interpretable guarantees for any attribution method. Moreover, we show that mild smoothing achieves a more favorable trade-off between accuracy and stability, avoiding the aggressive compromises made in prior certification methods. To explain this behavior, we use Boolean function analysis to derive a novel characterization of stability under smoothing. We evaluate SCA on vision and language tasks and demonstrate the effectiveness of soft stability in measuring the robustness of explanation methods.
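The abstract does not spell out SCA's mechanics, but a sampling-based stability certificate of the general kind it describes can be sketched in a few lines. The sketch below is a hypothetical illustration, not the authors' SCA: the `model` and `mask` interfaces, the additive perturbation scheme, and the Hoeffding confidence bound are all assumptions introduced for illustration.

```python
import numpy as np

def estimate_stability_rate(model, x, mask, radius, n_samples=1000,
                            alpha=0.05, rng=None):
    """Monte Carlo sketch of a soft-stability-style certificate (assumed
    scheme, not the paper's exact SCA): estimate the fraction of masks,
    obtained by revealing up to `radius` extra features of `mask`, on which
    the model's prediction is unchanged. Returns the empirical rate and a
    Hoeffding lower bound that holds with probability 1 - alpha."""
    rng = np.random.default_rng() if rng is None else rng
    base_pred = model(x * mask).argmax()          # prediction on the attribution mask
    hidden = np.flatnonzero(mask == 0)            # features not selected by the attribution
    agree = 0
    for _ in range(n_samples):
        k = int(rng.integers(1, radius + 1))      # reveal between 1 and `radius` extra features
        extra = rng.choice(hidden, size=min(k, hidden.size), replace=False)
        perturbed = mask.copy()
        perturbed[extra] = 1
        agree += int(model(x * perturbed).argmax() == base_pred)
    rate = agree / n_samples
    lower = rate - np.sqrt(np.log(1 / alpha) / (2 * n_samples))
    return rate, max(lower, 0.0)
```

Because the estimator only queries the model's predictions, a scheme like this is model-agnostic, and the confidence bound tightens at the usual $O(1/\sqrt{n})$ Monte Carlo rate, which is consistent with the sample-efficiency the abstract claims.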
Cite
Text
Jin et al. "Probabilistic Stability Guarantees for Feature Attributions." Advances in Neural Information Processing Systems, 2025.

Markdown

[Jin et al. "Probabilistic Stability Guarantees for Feature Attributions." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/jin2025neurips-probabilistic/)

BibTeX
@inproceedings{jin2025neurips-probabilistic,
title = {{Probabilistic Stability Guarantees for Feature Attributions}},
author = {Jin, Helen and Xue, Anton and You, Weiqiu and Goel, Surbhi and Wong, Eric},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/jin2025neurips-probabilistic/}
}