[Re] Improving Interpretation Faithfulness for Vision Transformers
Abstract
This work aims to reproduce the results of Faithful Vision Transformers (FViTs) proposed by Hu et al. (2024), alongside interpretability methods for Vision Transformers from Chefer et al. (2021) and Xu et al. (2022). We investigate the claims made by Hu et al. (2024), namely that applying Diffusion Denoised Smoothing (DDS) improves interpretability robustness to (1) attacks in a segmentation task and (2) perturbations and attacks in a classification task. We also extend the original study by investigating the authors’ claim that adding DDS to any interpretability method can improve its robustness under attack. We test this on baseline methods and on the recently proposed Attribution Rollout method. In addition, we measure the computational cost and environmental impact of obtaining an FViT through DDS. Our results broadly agree with the original study’s findings, although we find and discuss minor discrepancies.
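At a high level, DDS makes an interpretability map more robust by averaging it over noise-perturbed, denoised copies of the input. The following is a minimal sketch of that averaging loop, not the paper's implementation: `denoise` and `explain` are hypothetical placeholders standing in for a diffusion denoiser and a ViT attribution method (e.g. Attribution Rollout), and `sigma`/`n_samples` are illustrative parameters.

```python
import numpy as np

def denoise(x):
    """Placeholder for a diffusion denoiser (identity here)."""
    return x

def explain(x):
    """Placeholder attribution method: a normalized magnitude map."""
    m = np.abs(x)
    return m / (m.sum() + 1e-8)

def smoothed_explanation(x, sigma=0.25, n_samples=8, seed=0):
    """DDS-style smoothing sketch: add Gaussian noise to the input,
    denoise each noisy sample, compute its attribution map, and
    average the maps across samples."""
    rng = np.random.default_rng(seed)
    maps = [
        explain(denoise(x + sigma * rng.standard_normal(x.shape)))
        for _ in range(n_samples)
    ]
    return np.mean(maps, axis=0)

x = np.ones((4, 4))        # toy stand-in for an input image
m = smoothed_explanation(x)  # smoothed attribution map, same shape as x
```

The averaging over denoised samples is what the robustness claims rest on: an attack crafted for the clean input is unlikely to survive the noise-then-denoise pipeline across all samples.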
Cite
Text
Text

Kurek et al. "[Re] Improving Interpretation Faithfulness for Vision Transformers." Transactions on Machine Learning Research, 2025.

Markdown

[Kurek et al. "[Re] Improving Interpretation Faithfulness for Vision Transformers." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/kurek2025tmlr-re/)

BibTeX
@article{kurek2025tmlr-re,
  title   = {{[Re] Improving Interpretation Faithfulness for Vision Transformers}},
  author  = {Kurek, Izabela and Trejter, Wojciech and Frkovic, Stipe and Erdelez, Andro},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/kurek2025tmlr-re/}
}