VQA: Visual Question Answering
Abstract
We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance.
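Because each question is paired with multiple human answers, open-ended predictions can be scored by consensus rather than by exact match against a single ground truth. The snippet below is a minimal sketch of such a consensus metric, in which a prediction earns partial credit per agreeing annotator and full credit once three annotators agree; the function name and the simple lower-casing normalization are illustrative assumptions, not the paper's exact evaluation code.

from typing import List

def consensus_accuracy(predicted: str, human_answers: List[str]) -> float:
    """Score one predicted answer against the human answers for a question.

    A prediction gets full credit if at least three annotators gave the
    same answer, and partial credit (1/3 or 2/3) otherwise. Lower-casing
    and whitespace stripping stand in for fuller answer normalization.
    """
    pred = predicted.strip().lower()
    matches = sum(1 for a in human_answers if a.strip().lower() == pred)
    return min(matches / 3.0, 1.0)

# Example: 10 annotators answered one question about an object's color.
answers = ["yellow"] * 5 + ["gold"] * 3 + ["blonde"] * 2
print(consensus_accuracy("yellow", answers))  # 1.0  (5 agreements, capped)
print(consensus_accuracy("gold", answers))    # 1.0  (exactly 3 agreements)
print(consensus_accuracy("blonde", answers))  # 0.67 (2 of 3 needed agreements)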
Cite
Text
Antol et al. "VQA: Visual Question Answering." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.279
Markdown
[Antol et al. "VQA: Visual Question Answering." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/antol2015iccv-vqa/) doi:10.1109/ICCV.2015.279
BibTeX
@inproceedings{antol2015iccv-vqa,
  title     = {{VQA: Visual Question Answering}},
  author    = {Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret and Batra, Dhruv and Zitnick, C. Lawrence and Parikh, Devi},
  booktitle = {International Conference on Computer Vision},
  year      = {2015},
  doi       = {10.1109/ICCV.2015.279},
  url       = {https://mlanthology.org/iccv/2015/antol2015iccv-vqa/}
}