FastRM: An Efficient and Automatic Explainability Framework for Multimodal Generative Models
Abstract
Large Vision Language Models (LVLMs) have demonstrated remarkable reasoning capabilities over textual and visual inputs. However, these models remain prone to generating misinformation. Identifying and mitigating ungrounded responses is crucial for developing trustworthy AI. Traditional explainability methods, such as gradient-based relevancy maps, offer insight into the decision process of models but are often computationally expensive and unsuitable for real-time output validation. In this work, we introduce FastRM, an efficient method for predicting explainable relevancy maps of LVLMs. Furthermore, FastRM provides both quantitative and qualitative assessment of model confidence. Experimental results demonstrate that FastRM achieves a 99.8% reduction in computation time and a 44.4% reduction in memory footprint compared to traditional relevancy map generation. FastRM makes explainable AI more practical and scalable, thereby promoting its deployment in real-world applications and enabling users to more effectively evaluate the reliability of model outputs.
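To illustrate where the reported savings could come from, the sketch below contrasts a FastRM-style learned predictor, which maps hidden states to a relevancy map in a single forward pass, with gradient-based relevancy, which requires a backward pass per generated token. This is a minimal sketch assuming the predictor is a lightweight head over the LVLM's final hidden states; the module name, dimensions, and architecture here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical FastRM-style relevancy prediction: a small head maps the LVLM's
# last-layer hidden states directly to per-token relevancy scores, avoiding the
# backward pass that gradient-based relevancy maps require.
# All names and sizes below are illustrative assumptions.

import torch
import torch.nn as nn


class RelevancyHead(nn.Module):
    """Hypothetical lightweight head: hidden states -> per-token relevancy scores."""

    def __init__(self, hidden_dim: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 512),
            nn.GELU(),
            nn.Linear(512, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) from the LVLM's last layer
        scores = self.mlp(hidden_states).squeeze(-1)  # (batch, seq_len)
        return torch.softmax(scores, dim=-1)          # normalized relevancy map


# Single forward pass, no gradient graph retained: this is where the compute
# and memory savings over gradient-based relevancy maps would come from.
head = RelevancyHead()
dummy_hidden = torch.randn(1, 600, 4096)  # e.g. concatenated image + text tokens
with torch.no_grad():
    relevancy_map = head(dummy_hidden)
print(relevancy_map.shape)  # torch.Size([1, 600])
```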
Cite
Text
Stan et al. "FastRM: An Efficient and Automatic Explainability Framework for Multimodal Generative Models." ICLR 2025 Workshops: QUESTION, 2025.
Markdown
[Stan et al. "FastRM: An Efficient and Automatic Explainability Framework for Multimodal Generative Models." ICLR 2025 Workshops: QUESTION, 2025.](https://mlanthology.org/iclrw/2025/stan2025iclrw-fastrm/)
BibTeX
@inproceedings{stan2025iclrw-fastrm,
title = {{FastRM: An Efficient and Automatic Explainability Framework for Multimodal Generative Models}},
author = {Stan, Gabriela Ben-Melech and Aflalo, Estelle and Luo, Man and Rosenman, Shachar and Le, Tiep and Paul, Sayak and Tseng, Shao-Yen and Lal, Vasudev},
booktitle = {ICLR 2025 Workshops: QUESTION},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/stan2025iclrw-fastrm/}
}