Knowledge Acquisition for Visual Question Answering via Iterative Querying
Abstract
Humans possess an extraordinary ability to learn new skills and new knowledge for problem solving. Such learning ability is also required by an automatic model to deal with arbitrary, open-ended questions in the visual world. We propose a neural-based approach to acquiring task-driven information for visual question answering (VQA). Our model proposes queries to actively acquire relevant information from external auxiliary data. Supporting evidence from either human-curated or automatic sources is encoded and stored into a memory bank. We show that acquiring task-driven evidence effectively improves model performance on both the Visual7W and VQA datasets; moreover, these queries offer a certain level of interpretability in our iterative QA model.
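As a rough illustration of the iterative querying loop described in the abstract, the sketch below shows one way such a model could be wired up: at each step the model proposes a query, fetches evidence from an external source, and folds the encoded evidence into a memory state before answering. The module names, dimensions, and the fetch_evidence callable are hypothetical illustrations and are not taken from the paper.

import torch
import torch.nn as nn

class IterativeQueryVQA(nn.Module):
    """Hypothetical sketch of an iterative-querying VQA loop:
    propose a query, fetch supporting evidence, encode it into
    a memory state, repeat, then answer from the final state."""

    def __init__(self, hidden_dim=512, num_queries=100, num_answers=1000, num_steps=3):
        super().__init__()
        self.num_steps = num_steps
        self.query_head = nn.Linear(hidden_dim, num_queries)        # proposes the next query
        self.evidence_encoder = nn.GRUCell(hidden_dim, hidden_dim)  # folds evidence into the state
        self.answer_head = nn.Linear(hidden_dim, num_answers)

    def forward(self, question_image_feat, fetch_evidence):
        # question_image_feat: (batch, hidden_dim) joint embedding of image and question
        # fetch_evidence: callable mapping query ids -> (batch, hidden_dim) evidence embeddings
        state = question_image_feat
        memory = []  # memory bank of encoded evidence states
        for _ in range(self.num_steps):
            query_logits = self.query_head(state)            # propose a query
            query_id = query_logits.argmax(dim=-1)           # pick the most relevant query
            evidence = fetch_evidence(query_id)              # look up external auxiliary evidence
            state = self.evidence_encoder(evidence, state)   # write encoded evidence into the state
            memory.append(state)
        # answer from the final state after all evidence has been gathered
        return self.answer_head(memory[-1])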
Cite
Text
Zhu et al. "Knowledge Acquisition for Visual Question Answering via Iterative Querying." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.651
Markdown
[Zhu et al. "Knowledge Acquisition for Visual Question Answering via Iterative Querying." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/zhu2017cvpr-knowledge/) doi:10.1109/CVPR.2017.651
BibTeX
@inproceedings{zhu2017cvpr-knowledge,
title = {{Knowledge Acquisition for Visual Question Answering via Iterative Querying}},
author = {Zhu, Yuke and Lim, Joseph J. and Fei-Fei, Li},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2017},
doi = {10.1109/CVPR.2017.651},
url = {https://mlanthology.org/cvpr/2017/zhu2017cvpr-knowledge/}
}