Natural Language Object Retrieval
Abstract
In this paper, we address the task of natural language object retrieval: localizing a target object within a given image based on a natural language query describing the object. Natural language object retrieval differs from the text-based image retrieval task in that it involves spatial information about objects within the scene as well as global scene context. To address this, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as a scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations, and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for that box, and can transfer visual-linguistic knowledge from the image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large-scale vision and language datasets for knowledge transfer.
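The scoring pipeline the abstract describes can be sketched as follows. This is a toy stand-in, not the paper's actual network: the recurrent language model is replaced by a single linear layer, and the 8-dimensional spatial feature layout (normalized corners, center, and size of the box) is an assumption about the spatial configuration encoding. It only illustrates the idea of scoring each candidate box by combining query, local, spatial, and global features into one probability distribution over boxes.

```python
import numpy as np

def spatial_features(box, img_w, img_h):
    """8-d spatial configuration of a box (x1, y1, x2, y2):
    normalized corners, center, and size. The exact layout here
    is an illustrative assumption, not the paper's definition."""
    x1, y1, x2, y2 = box
    return np.array([
        x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h,
        (x1 + x2) / (2 * img_w), (y1 + y2) / (2 * img_h),
        (x2 - x1) / img_w, (y2 - y1) / img_h,
    ])

def score_boxes(query_emb, local_feats, global_feat, boxes, img_w, img_h, W):
    """Toy scorer: for each candidate box, concatenate the query
    embedding, local image descriptor, spatial configuration, and
    global scene feature, apply a linear layer W, and softmax over
    boxes to get a probability per box (higher = better match)."""
    feats = np.stack([
        np.concatenate([query_emb, lf,
                        spatial_features(b, img_w, img_h),
                        global_feat])
        for lf, b in zip(local_feats, boxes)
    ])
    logits = feats @ W
    e = np.exp(logits - logits.max())  # stable softmax over candidates
    return e / e.sum()
```

In the actual SCRC model the score is the probability of the full query text under a recurrent network conditioned on the box features, rather than a single linear projection; the sketch above only mirrors the per-box feature fusion and normalization.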
Cite
Text
Hu et al. "Natural Language Object Retrieval." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.493
Markdown
[Hu et al. "Natural Language Object Retrieval." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/hu2016cvpr-natural/) doi:10.1109/CVPR.2016.493
BibTeX
@inproceedings{hu2016cvpr-natural,
title = {{Natural Language Object Retrieval}},
author = {Hu, Ronghang and Xu, Huazhe and Rohrbach, Marcus and Feng, Jiashi and Saenko, Kate and Darrell, Trevor},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2016},
doi = {10.1109/CVPR.2016.493},
url = {https://mlanthology.org/cvpr/2016/hu2016cvpr-natural/}
}