Interpretable and Globally Optimal Prediction for Textual Grounding Using Image Concepts
Abstract
Textual grounding is an important but challenging task for human-computer interaction, robotics, and knowledge mining. Existing algorithms generally formulate the task as selection from a set of bounding box proposals obtained from deep-network-based systems. In this work, we demonstrate that we can cast the problem of textual grounding into a unified framework that permits efficient search over all possible bounding boxes. Hence, the method is able to consider significantly more proposals and does not rely on a successful first stage hypothesizing bounding box proposals. Beyond that, we demonstrate that the trained parameters of our model can be used as word embeddings which capture spatial-image relationships and provide interpretability. Lastly, at the time of submission, our approach outperformed the then state-of-the-art methods on the Flickr 30k Entities and ReferItGame datasets by 3.08% and 7.77%, respectively.
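The "efficient search over all possible bounding boxes" is the key departure from proposal-based pipelines. A minimal sketch of the underlying idea follows, assuming (as the abstract suggests but does not spell out) that a box's score decomposes into per-pixel contributions from word-weighted image concept maps, so an integral image lets every box be scored in constant time. This is not the authors' code; names such as `concept_maps` and `word_weights` are hypothetical, and plain enumeration stands in for the paper's more efficient exact search.

```python
import numpy as np

def box_energy_search(concept_maps, word_weights):
    """Return the box (top, left, bottom, right) with the maximum summed score.

    concept_maps: (K, H, W) per-pixel scores for K image concepts.
    word_weights: (K,) query-dependent weights over the concepts.
    """
    # Per-pixel score for this query: weighted combination of concept maps.
    score = np.tensordot(word_weights, concept_maps, axes=1)  # (H, W)

    # Integral image: after O(HW) setup, any rectangle sums in O(1).
    ii = np.zeros((score.shape[0] + 1, score.shape[1] + 1))
    ii[1:, 1:] = score.cumsum(axis=0).cumsum(axis=1)

    H, W = score.shape
    best, best_box = -np.inf, None
    # Exhaustive search over all O(H^2 W^2) boxes, each scored in O(1);
    # this recovers the global optimum without a proposal stage.
    for t in range(H):
        for b in range(t + 1, H + 1):
            for l in range(W):
                for r in range(l + 1, W + 1):
                    s = ii[b, r] - ii[t, r] - ii[b, l] + ii[t, l]
                    if s > best:
                        best, best_box = s, (t, l, b, r)
    return best_box, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = rng.standard_normal((5, 16, 16))   # toy concept score maps
    weights = rng.standard_normal(5)          # toy word-embedding weights
    print(box_energy_search(maps, weights))
```

Because the query enters only through `word_weights`, inspecting those learned weights per word is what gives the interpretability claimed in the abstract.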
Cite
Text
Yeh et al. "Interpretable and Globally Optimal Prediction for Textual Grounding Using Image Concepts." Neural Information Processing Systems, 2017.
Markdown
[Yeh et al. "Interpretable and Globally Optimal Prediction for Textual Grounding Using Image Concepts." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/yeh2017neurips-interpretable/)
BibTeX
@inproceedings{yeh2017neurips-interpretable,
title = {{Interpretable and Globally Optimal Prediction for Textual Grounding Using Image Concepts}},
author = {Yeh, Raymond and Xiong, Jinjun and Hwu, Wen-Mei and Do, Minh and Schwing, Alexander},
booktitle = {Neural Information Processing Systems},
year = {2017},
  pages = {1912--1922},
url = {https://mlanthology.org/neurips/2017/yeh2017neurips-interpretable/}
}