CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning
Abstract
When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover shortcomings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.
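The detailed per-question annotations mentioned in the abstract (answers and functional programs describing the required reasoning) ship with the dataset as JSON. Below is a minimal sketch for inspecting them, assuming the publicly released CLEVR v1.0 questions layout with fields such as "questions", "question", "answer", and "program"; the file path is illustrative and field names may need adjusting for your copy of the data.

import json
from collections import Counter

def summarize_questions(path):
    """Load a CLEVR questions file and tally answers and program lengths.

    Assumes the CLEVR v1.0 JSON layout: a top-level "questions" list whose
    entries carry "answer" (train/val only) and "program" (a list of
    functional-program steps).
    """
    with open(path) as f:
        data = json.load(f)

    questions = data["questions"]
    answer_counts = Counter(q["answer"] for q in questions if "answer" in q)
    program_lengths = Counter(len(q.get("program", [])) for q in questions)
    return answer_counts, program_lengths

if __name__ == "__main__":
    # Hypothetical path into an unpacked CLEVR_v1.0 download.
    answers, lengths = summarize_questions("CLEVR_v1.0/questions/CLEVR_val_questions.json")
    print("Most common answers:", answers.most_common(5))
    print("Program length distribution:", sorted(lengths.items()))

Tallying answers is a quick way to check the low-bias claim (no single answer should dominate a question type), and program lengths give a rough measure of how much reasoning each question requires.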
Cite
Text
Johnson et al. "CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.215
Markdown
[Johnson et al. "CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/johnson2017cvpr-clevr/) doi:10.1109/CVPR.2017.215
BibTeX
@inproceedings{johnson2017cvpr-clevr,
title = {{CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning}},
author = {Johnson, Justin and Hariharan, Bharath and van der Maaten, Laurens and Fei-Fei, Li and Zitnick, C. Lawrence and Girshick, Ross},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2017},
doi = {10.1109/CVPR.2017.215},
url = {https://mlanthology.org/cvpr/2017/johnson2017cvpr-clevr/}
}