On the Complexity-Faithfulness Trade-Off of Gradient-Based Explanations

Abstract

ReLU networks, while prevalent for visual data, have sharp transitions, sometimes relying on individual pixels for predictions, which makes vanilla gradient-based explanations noisy and difficult to interpret. Existing methods, such as GradCAM, smooth these explanations by producing surrogate models, at the cost of faithfulness. We introduce a unifying spectral framework to systematically analyze and quantify smoothness, faithfulness, and their trade-off in explanations. Using this framework, we quantify and regularize the contribution of ReLU networks to high-frequency information, providing a principled approach to identifying this trade-off. Our analysis characterizes how surrogate-based smoothing distorts explanations, leading to an "explanation gap" that we formally define and measure for different post-hoc methods. Finally, we validate our theoretical findings across different design choices, datasets, and ablations.
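The spectral view of explanation complexity can be made concrete with a small sketch. The snippet below is illustrative only, not the paper's framework: it computes a vanilla-gradient saliency map for a PyTorch model and measures the fraction of its spectral energy above a radial frequency cutoff, a common proxy for the high-frequency "noisiness" the abstract refers to. The `cutoff` parameter and the radial-energy measure are assumptions chosen for the example.

```python
# Illustrative sketch (assumed setup, not the paper's exact method):
# quantify high-frequency content of a vanilla-gradient explanation via the FFT.
import torch
import torch.nn as nn

def vanilla_gradient(model: nn.Module, x: torch.Tensor, target: int) -> torch.Tensor:
    """Saliency map: gradient of the target logit w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, target].backward()
    # Aggregate channel gradients into a single-channel saliency map.
    return x.grad[0].abs().sum(dim=0)

def high_freq_energy(saliency: torch.Tensor, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial cutoff (relative to Nyquist)."""
    spectrum = torch.fft.fftshift(torch.fft.fft2(saliency)).abs() ** 2
    h, w = spectrum.shape
    fy = torch.linspace(-1, 1, h).view(-1, 1)
    fx = torch.linspace(-1, 1, w).view(1, -1)
    radius = torch.sqrt(fx ** 2 + fy ** 2)  # broadcast to (h, w)
    return (spectrum[radius > cutoff].sum() / spectrum.sum()).item()

# Usage with a toy ReLU CNN on a random "image":
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
x = torch.randn(1, 3, 64, 64)
sal = vanilla_gradient(model, x, target=0)
print(f"high-frequency energy fraction: {high_freq_energy(sal):.3f}")
```

Under this proxy, a smoother explanation (e.g., one from a surrogate-based method such as GradCAM) would score lower than the vanilla gradient, which is the kind of complexity-faithfulness comparison the paper formalizes spectrally.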

Cite

Text

Mehrpanah et al. "On the Complexity-Faithfulness Trade-Off of Gradient-Based Explanations." International Conference on Computer Vision, 2025.

Markdown

[Mehrpanah et al. "On the Complexity-Faithfulness Trade-Off of Gradient-Based Explanations." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/mehrpanah2025iccv-complexityfaithfulness/)

BibTeX

@inproceedings{mehrpanah2025iccv-complexityfaithfulness,
  title     = {{On the Complexity-Faithfulness Trade-Off of Gradient-Based Explanations}},
  author    = {Mehrpanah, Amir and Gamba, Matteo and Smith, Kevin and Azizpour, Hossein},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {3531--3541},
  url       = {https://mlanthology.org/iccv/2025/mehrpanah2025iccv-complexityfaithfulness/}
}