DocVXQA: Context-Aware Visual Explanations for Document Question Answering

Abstract

We propose DocVXQA, a novel framework for visually self-explainable document question answering, where the goal is not only to produce accurate answers to questions but also to learn visual heatmaps that highlight critical regions, offering interpretable justifications for the model's decisions. To integrate explanations into the learning process, we quantitatively formulate explainability principles as explicit learning criteria. Unlike conventional relevance-map methods that solely emphasize regions relevant to the answer, our context-aware DocVXQA delivers explanations that are contextually sufficient yet representation-efficient. This fosters user trust while striking a balance between predictive performance and interpretability in document visual question answering applications. Extensive experiments, including human evaluation, provide strong evidence supporting the effectiveness of our method.
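To make the abstract's trade-off concrete, the sketch below shows one way a composite training objective could combine answer accuracy with the two explanation criteria named above: contextual sufficiency (the highlighted regions alone should still support the answer) and representation efficiency (the heatmap should stay compact). This is an illustrative approximation only, not the authors' implementation; the mask network, the loss weights lambda_suff and lambda_sparse, and all tensor shapes are assumptions.

# Illustrative sketch (assumptions throughout): a composite loss that couples
# answer prediction with sufficiency and efficiency criteria on a heatmap.
import torch
import torch.nn.functional as F

def explanation_aware_loss(model, mask_net, image, question, answer_ids,
                           lambda_suff=1.0, lambda_sparse=0.1):
    # Hypothetical mask network: soft heatmap over document regions in [0, 1].
    heatmap = torch.sigmoid(mask_net(image, question))        # (B, 1, H, W)

    # Task loss: answer prediction from the full document image.
    logits_full = model(image, question)                      # (B, T, V)
    task_loss = F.cross_entropy(logits_full.flatten(0, 1), answer_ids.flatten())

    # Sufficiency: predictions from only the highlighted regions should match
    # predictions from the full document.
    logits_masked = model(image * heatmap, question)
    suff_loss = F.kl_div(
        F.log_softmax(logits_masked, dim=-1),
        F.softmax(logits_full.detach(), dim=-1),
        reduction="batchmean")

    # Efficiency: penalize the heatmap area so explanations stay sparse.
    sparse_loss = heatmap.mean()

    return task_loss + lambda_suff * suff_loss + lambda_sparse * sparse_loss

The two explanation terms pull in opposite directions (covering more of the page helps sufficiency, shrinking the mask helps efficiency), which is one plausible reading of how the framework balances predictive performance against interpretability.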

Cite

Text

Souibgui et al. "DocVXQA: Context-Aware Visual Explanations for Document Question Answering." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Souibgui et al. "DocVXQA: Context-Aware Visual Explanations for Document Question Answering." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/souibgui2025icml-docvxqa/)

BibTeX

@inproceedings{souibgui2025icml-docvxqa,
  title     = {{DocVXQA: Context-Aware Visual Explanations for Document Question Answering}},
  author    = {Souibgui, Mohamed Ali and Choi, Changkyu and Barsky, Andrey and Jung, Kangsoo and Valveny, Ernest and Karatzas, Dimosthenis},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {56549--56569},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/souibgui2025icml-docvxqa/}
}