Multiple Interaction Learning with Question-Type Prior Knowledge for Constraining Answer Search Space in Visual Question Answering

Abstract

Many approaches have been proposed for Visual Question Answering (VQA). However, few works examine how different joint modality methods behave with respect to question-type prior knowledge extracted from data, even though this information constrains the answer search space and provides a reliable cue for reasoning about the answer to a question asked about an input image. In this paper, we propose a novel VQA model that utilizes question-type prior information to improve VQA by leveraging multiple interactions between different joint modality methods, based on how each behaves when answering questions of different types. Extensive experiments on two benchmark datasets, i.e., VQA 2.0 and TDIUC, indicate that the proposed method yields the best performance compared with the most competitive approaches.
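To make the idea concrete, below is a minimal PyTorch sketch of the general mechanism the abstract describes, not the authors' implementation: two stand-in fusion operators (element-wise product and sum) replace the paper's joint modality methods, and a question-type classifier produces both per-type mixing weights over the operators and a per-type prior that constrains the answer search space. All class, argument, and variable names here are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultipleInteractionVQA(nn.Module):
    def __init__(self, img_dim, ques_dim, hidden_dim,
                 num_answers, num_qtypes, type_answer_prior):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.ques_proj = nn.Linear(ques_dim, hidden_dim)
        # One answer head per fusion operator.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, num_answers) for _ in range(2)])
        self.qtype_clf = nn.Linear(ques_dim, num_qtypes)
        # Learned trust in each fusion operator, per question type.
        self.type_mix = nn.Parameter(torch.zeros(num_qtypes, 2))
        # (num_qtypes, num_answers) answer prior estimated from training data.
        self.register_buffer("type_answer_prior", type_answer_prior)

    def forward(self, img_feat, ques_feat):
        v = torch.relu(self.img_proj(img_feat))
        q = torch.relu(self.ques_proj(ques_feat))
        fused = [v * q, v + q]  # the two stand-in joint modality interactions
        per_head = torch.stack(
            [head(f) for head, f in zip(self.heads, fused)], dim=1)
        qtype = F.softmax(self.qtype_clf(ques_feat), dim=-1)   # (B, num_qtypes)
        mix = F.softmax(qtype @ self.type_mix, dim=-1)         # (B, 2)
        logits = (mix.unsqueeze(-1) * per_head).sum(dim=1)     # (B, num_answers)
        # Constrain the answer space: down-weight answers that rarely
        # co-occur with the predicted question type.
        prior = qtype @ self.type_answer_prior                 # (B, num_answers)
        return logits + torch.log(prior.clamp_min(1e-6))

A toy usage, assuming the prior is a row-normalized (question type, answer) co-occurrence table:

prior = torch.rand(10, 3000)  # placeholder for counts gathered from training data
model = MultipleInteractionVQA(2048, 300, 512, 3000, 10, prior)
scores = model(torch.randn(4, 2048), torch.randn(4, 300))  # (4, 3000)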

Cite

Text

Do et al. "Multiple Interaction Learning with Question-Type Prior Knowledge for Constraining Answer Search Space in Visual Question Answering." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-66096-3_34

Markdown

[Do et al. "Multiple Interaction Learning with Question-Type Prior Knowledge for Constraining Answer Search Space in Visual Question Answering." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/do2020eccvw-multiple/) doi:10.1007/978-3-030-66096-3_34

BibTeX

@inproceedings{do2020eccvw-multiple,
  title     = {{Multiple Interaction Learning with Question-Type Prior Knowledge for Constraining Answer Search Space in Visual Question Answering}},
  author    = {Do, Tuong and Nguyen, Binh X. and Tran, Huy and Tjiputra, Erman and Tran, Quang D. and Do, Thanh-Toan},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2020},
  pages     = {496--510},
  doi       = {10.1007/978-3-030-66096-3_34},
  url       = {https://mlanthology.org/eccvw/2020/do2020eccvw-multiple/}
}