Answer-Type Prediction for Visual Question Answering
Abstract
Recently, algorithms for object recognition and related tasks have become sufficiently proficient that new vision tasks can now be pursued. In this paper, we build a system capable of answering open-ended text-based questions about images, which is known as Visual Question Answering (VQA). Our approach's key insight is that we can predict the form of the answer from the question. We formulate our solution in a Bayesian framework. When our approach is combined with a discriminative model, the combined model achieves state-of-the-art results on four benchmark datasets for open-ended VQA: DAQUAR, COCO-QA, The VQA Dataset, and Visual7W.
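The abstract's core idea, predicting the answer type from the question and combining it with per-type answer models in a Bayesian framework, amounts to marginalizing over answer types: P(A | Q, I) = Σ_t P(A | T=t, Q, I) · P(T=t | Q, I). The sketch below illustrates this combination rule with toy probabilities; the function name and the example distributions are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of marginalizing over predicted answer types:
#   P(A | Q, I) = sum_t P(A | T=t, Q, I) * P(T=t | Q, I)
# All names and probabilities here are toy assumptions, not the paper's model.

def marginal_answer_scores(p_type_given_q, p_answer_given_type):
    """Combine an answer-type predictor with per-type answer models."""
    scores = {}
    for t, p_t in p_type_given_q.items():
        for a, p_a in p_answer_given_type[t].items():
            # Accumulate each answer's score, weighted by its type's probability.
            scores[a] = scores.get(a, 0.0) + p_t * p_a
    return scores

# Toy example for a question like "What color is the cat?":
p_type = {"color": 0.9, "counting": 0.1}          # assumed P(T | Q) output
p_answer = {
    "color":    {"black": 0.7, "white": 0.3},     # assumed P(A | T="color")
    "counting": {"1": 0.6, "2": 0.4},             # assumed P(A | T="counting")
}
scores = marginal_answer_scores(p_type, p_answer)
best = max(scores, key=scores.get)  # -> "black" (0.9 * 0.7 = 0.63 dominates)
```

A strong type predictor lets the model effectively restrict the answer space to the plausible type (here, colors), which is the intuition behind the paper's key insight.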
Cite
Text
Kafle and Kanan. "Answer-Type Prediction for Visual Question Answering." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.538
Markdown
[Kafle and Kanan. "Answer-Type Prediction for Visual Question Answering." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/kafle2016cvpr-answertype/) doi:10.1109/CVPR.2016.538
BibTeX
@inproceedings{kafle2016cvpr-answertype,
title = {{Answer-Type Prediction for Visual Question Answering}},
author = {Kafle, Kushal and Kanan, Christopher},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2016},
doi = {10.1109/CVPR.2016.538},
url = {https://mlanthology.org/cvpr/2016/kafle2016cvpr-answertype/}
}