Sanity Simulations for Saliency Methods
Abstract
Saliency methods are a popular class of feature attribution explanation methods that aim to capture a model’s predictive reasoning by identifying "important" pixels in an input image. However, the development and adoption of these methods are hindered by the lack of access to ground-truth model reasoning, which prevents accurate evaluation. In this work, we design a synthetic benchmarking framework, SMERF, that allows us to perform ground-truth-based evaluation while controlling the complexity of the model’s reasoning. Experimentally, SMERF reveals significant limitations in existing saliency methods and, as a result, represents a useful tool for the development of new saliency methods.
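To make the idea of ground-truth-based evaluation concrete, the following is a minimal sketch (not SMERF itself, and all function names are hypothetical): it builds a toy synthetic image whose label depends only on a known patch, computes a plain gradient saliency map with PyTorch, and scores it by how much saliency mass lands on the truly important pixels.

```python
# Hedged illustration of ground-truth-based saliency evaluation on a toy
# synthetic task; this is NOT the SMERF benchmark, only a sketch of the idea.
import torch

def make_synthetic_example(size=32, patch=(8, 8, 8, 8)):
    """Create an image plus a boolean mask marking the known "important" patch."""
    img = torch.rand(1, 1, size, size)
    r, c, h, w = patch
    mask = torch.zeros(size, size, dtype=torch.bool)
    mask[r:r + h, c:c + w] = True
    return img, mask

def toy_model(img, mask):
    """A toy model whose output depends only on pixels inside the mask."""
    return img[0, 0][mask].sum()

def gradient_saliency(img, mask):
    """Plain gradient saliency, one of the methods such a benchmark would test."""
    img = img.clone().requires_grad_(True)
    toy_model(img, mask).backward()
    return img.grad[0, 0].abs()

def ground_truth_score(saliency, mask):
    """Fraction of total saliency mass that falls on the ground-truth region."""
    return (saliency[mask].sum() / saliency.sum()).item()

if __name__ == "__main__":
    img, mask = make_synthetic_example()
    sal = gradient_saliency(img, mask)
    print(f"saliency mass on ground-truth region: {ground_truth_score(sal, mask):.2f}")
```

Because the synthetic model's reasoning is fully specified, the score has an unambiguous interpretation; SMERF extends this basic setup by controlling how complex that ground-truth reasoning is.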
Cite
Text
Kim et al. "Sanity Simulations for Saliency Methods." International Conference on Machine Learning, 2022.
Markdown
[Kim et al. "Sanity Simulations for Saliency Methods." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/kim2022icml-sanity/)
BibTeX
@inproceedings{kim2022icml-sanity,
title = {{Sanity Simulations for Saliency Methods}},
author = {Kim, Joon Sik and Plumb, Gregory and Talwalkar, Ameet},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {11173--11200},
volume = {162},
url = {https://mlanthology.org/icml/2022/kim2022icml-sanity/}
}