Learning to Disambiguate by Asking Discriminative Questions

Abstract

The ability to ask questions is a powerful tool for gathering information in order to learn about the world and resolve ambiguities. In this paper, we explore the novel problem of generating discriminative questions to help disambiguate visual instances. Our work can be seen as a complement to, and a new extension of, the rich body of research on image captioning and question answering. We introduce the first large-scale dataset, with over 10,000 carefully annotated images-question tuples, to facilitate benchmarking. In particular, each tuple consists of a pair of images together with, on average, 4.6 discriminative questions (as positive samples) and 5.9 non-discriminative questions (as negative samples). In addition, we present an effective method for visual discriminative question generation. The method can be trained in a weakly supervised manner, without discriminative images-question tuples, using only existing visual question answering datasets. Promising results are shown against representative baselines through quantitative evaluations and user studies.

Cite

Text

Li et al. "Learning to Disambiguate by Asking Discriminative Questions." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.370

Markdown

[Li et al. "Learning to Disambiguate by Asking Discriminative Questions." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/li2017iccv-learning-a/) doi:10.1109/ICCV.2017.370

BibTeX

@inproceedings{li2017iccv-learning-a,
  title     = {{Learning to Disambiguate by Asking Discriminative Questions}},
  author    = {Li, Yining and Huang, Chen and Tang, Xiaoou and Loy, Chen Change},
  booktitle = {International Conference on Computer Vision},
  year      = {2017},
  doi       = {10.1109/ICCV.2017.370},
  url       = {https://mlanthology.org/iccv/2017/li2017iccv-learning-a/}
}