Multimodal Commonsense Knowledge Distillation for Visual Question Answering (Student Abstract)

Abstract

Existing Multimodal Large Language Models (MLLMs) and Visual Language Pretrained Models (VLPMs) have shown remarkable performance in general Visual Question Answering (VQA). However, these models struggle with VQA questions that require external commonsense knowledge, due to the challenges in generating high-quality prompts and the high computational cost of fine-tuning. In this work, we propose a novel graph-based multimodal commonsense knowledge distillation framework that constructs a unified relational graph over commonsense knowledge, visual objects, and questions, encoded by a Graph Convolutional Network (GCN) in a teacher-student setting. The proposed framework is flexible with any type of teacher and student model without further fine-tuning, and achieves competitive performance on the ScienceQA dataset. The code is available at https://github.com/adlnlp/MCKDVQA.
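To make the described setup concrete, below is a minimal, hypothetical sketch of the idea the abstract outlines: question, visual-object, and commonsense nodes joined in one relational graph, encoded by a small GCN student, and trained to match a frozen teacher's answer distribution via distillation. All module names, dimensions, and the loss weighting here are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Hypothetical sketch only: a GCN student distilled from a frozen teacher's
# answer logits over a unified graph of question / visual / commonsense nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: normalized adjacency times node features."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetric normalization with self-loops (Kipf & Welling style).
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        deg_inv_sqrt = adj.sum(dim=-1).clamp(min=1e-6).pow(-0.5)
        adj_norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
        return F.relu(self.linear(adj_norm @ x))


class GCNStudent(nn.Module):
    """Student: pools the unified graph into answer logits."""

    def __init__(self, node_dim, hidden_dim, num_answers):
        super().__init__()
        self.gcn1 = SimpleGCNLayer(node_dim, hidden_dim)
        self.gcn2 = SimpleGCNLayer(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, node_feats, adj):
        h = self.gcn2(self.gcn1(node_feats, adj), adj)
        return self.classifier(h.mean(dim=0))  # graph-level logits


def distillation_loss(student_logits, teacher_logits, label,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL (teacher) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits.unsqueeze(0), label.unsqueeze(0))
    return alpha * soft + (1 - alpha) * hard


if __name__ == "__main__":
    # Toy unified graph: question tokens + visual objects + commonsense nodes.
    num_nodes, node_dim, num_answers = 12, 64, 4
    node_feats = torch.randn(num_nodes, node_dim)            # pre-extracted embeddings
    adj = (torch.rand(num_nodes, num_nodes) > 0.7).float()   # relational edges
    adj = ((adj + adj.T) > 0).float()                        # symmetric adjacency

    student = GCNStudent(node_dim, hidden_dim=128, num_answers=num_answers)
    teacher_logits = torch.randn(num_answers)                # from a frozen MLLM/VLPM teacher
    label = torch.tensor(2)                                  # gold answer index

    loss = distillation_loss(student(node_feats, adj), teacher_logits, label)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

In this sketch the teacher is queried once and kept frozen, so no fine-tuning of the large model is needed; only the lightweight GCN student is updated, which is consistent with the flexibility claim in the abstract.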

Cite

Text

Yang et al. "Multimodal Commonsense Knowledge Distillation for Visual Question Answering (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35320

Markdown

[Yang et al. "Multimodal Commonsense Knowledge Distillation for Visual Question Answering (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/yang2025aaai-multimodal/) doi:10.1609/AAAI.V39I28.35320

BibTeX

@inproceedings{yang2025aaai-multimodal,
  title     = {{Multimodal Commonsense Knowledge Distillation for Visual Question Answering (Student Abstract)}},
  author    = {Yang, Shuo and Luo, Siwen and Han, Soyeon Caren},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {29545--29547},
  doi       = {10.1609/AAAI.V39I28.35320},
  url       = {https://mlanthology.org/aaai/2025/yang2025aaai-multimodal/}
}