Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data
Abstract
Large Multimodal Models (LMMs), or Vision-Language Models (VLMs), have shown impressive capabilities in a wide range of visual tasks. However, they often struggle with fine-grained visual reasoning, failing to identify domain-specific objectives and provide justifiable explanations for their predictions. To address this challenge, we propose a novel visual rejection sampling framework that improves the cognition and explainability of LMMs using self-synthesized data. Specifically, visual fine-tuning requires images, queries, and target answers. Our approach begins by synthesizing interpretable answers that include human-verifiable visual features. These features are grounded in expert-defined concepts and are selected based on their alignment with the image content. After each round of fine-tuning, we apply a reward-model-free filtering mechanism to select the highest-quality interpretable answers for the next round of tuning. This iterative process of synthetic data generation and fine-tuning progressively improves the model's ability to generate accurate and reasonable explanations. Experimental results demonstrate the effectiveness of our method in improving both the accuracy and explainability of specialized visual classification tasks.
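The generate-filter-retune loop described in the abstract can be sketched in miniature. The code below is an illustrative toy, not the paper's implementation: `generate_candidates` stands in for sampling interpretable answers from the LMM, and the filtering criterion (keep only answers whose predicted label matches the ground truth) is one simple example of a reward-model-free check; all names and data are assumptions for illustration.

```python
import random

def generate_candidates(image_id, num_samples, rng):
    """Stand-in for sampling interpretable answers from an LMM.

    Each candidate pairs a predicted label with the visual
    features it cites as evidence (hypothetical values).
    """
    labels = ["species_a", "species_b"]
    features = ["striped wings", "red crest", "forked tail"]
    return [
        {"label": rng.choice(labels),
         "features": rng.sample(features, k=2)}
        for _ in range(num_samples)
    ]

def filter_candidates(candidates, ground_truth_label):
    """Reward-model-free filtering: keep only answers whose
    predicted label matches the ground truth."""
    return [c for c in candidates if c["label"] == ground_truth_label]

def rejection_sampling_round(dataset, num_samples=8, seed=0):
    """One round: synthesize answers per image, filter them, and
    return the survivors as the next fine-tuning set."""
    rng = random.Random(seed)
    tuning_set = []
    for image_id, gt_label in dataset:
        candidates = generate_candidates(image_id, num_samples, rng)
        for answer in filter_candidates(candidates, gt_label):
            tuning_set.append((image_id, answer))
    return tuning_set

dataset = [("img_001", "species_a"), ("img_002", "species_b")]
selected = rejection_sampling_round(dataset)
```

In the full method, the retained `(image, answer)` pairs would be used to fine-tune the model before the next sampling round, so later rounds draw candidates from a progressively better generator.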
Cite
Text
Shi et al. "Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data." International Conference on Learning Representations, 2025.
Markdown
[Shi et al. "Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/shi2025iclr-enhancing/)
BibTeX
@inproceedings{shi2025iclr-enhancing,
  title = {{Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data}},
  author = {Shi, Yucheng and Li, Quanzheng and Sun, Jin and Li, Xiang and Liu, Ninghao},
  booktitle = {International Conference on Learning Representations},
  year = {2025},
  url = {https://mlanthology.org/iclr/2025/shi2025iclr-enhancing/}
}