Open Vocabulary Scene Parsing
Abstract
Recognizing arbitrary objects in the wild has been a challenging problem due to the limitations of existing classification models and datasets. In this paper, we propose a new task that aims at parsing scenes with a large and open vocabulary, and we explore several evaluation metrics for this problem. Our approach is a framework that jointly embeds image pixels and word concepts, where word concepts are connected by semantic relations. We validate the open-vocabulary prediction ability of our framework on the ADE20K dataset, which covers a wide variety of scenes and objects. We further explore the trained joint embedding space to show its interpretability.
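A minimal sketch of the joint-embedding idea described above (not the authors' implementation): per-pixel image features and word-concept vectors are projected into a shared space, and each pixel is labeled with the nearest concept, so any word with an embedding can be predicted. The dimensions, projection matrices, and function names below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IMG, D_WORD, D_JOINT = 512, 300, 128   # hypothetical feature/embedding sizes
H, W = 4, 4                              # tiny grid of per-pixel features

# Hypothetical learned projections into the joint embedding space.
W_img = rng.normal(scale=0.02, size=(D_IMG, D_JOINT))
W_word = rng.normal(scale=0.02, size=(D_WORD, D_JOINT))

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def embed_pixels(pixel_feats):
    """Project per-pixel CNN features (H, W, D_IMG) into the joint space."""
    return l2_normalize(pixel_feats @ W_img)

def embed_concepts(word_vecs):
    """Project word-concept vectors (C, D_WORD) into the joint space."""
    return l2_normalize(word_vecs @ W_word)

def parse(pixel_feats, word_vecs):
    """Assign each pixel the concept with the highest cosine similarity."""
    px = embed_pixels(pixel_feats)           # (H, W, D_JOINT)
    cx = embed_concepts(word_vecs)           # (C, D_JOINT)
    scores = px @ cx.T                       # (H, W, C)
    return scores.argmax(axis=-1)

# Toy usage with random stand-ins for CNN features and word vectors.
pixel_feats = rng.normal(size=(H, W, D_IMG))
word_vecs = rng.normal(size=(3, D_WORD))     # e.g. "chair", "furniture", "sky"
print(parse(pixel_feats, word_vecs))         # (H, W) map of concept indices
```

In the paper the projections are trained end-to-end and the word concepts are organized by semantic relations (e.g., hypernym links), which this toy example omits.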
Cite
Text
Zhao et al. "Open Vocabulary Scene Parsing." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.221
Markdown
[Zhao et al. "Open Vocabulary Scene Parsing." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/zhao2017iccv-open/) doi:10.1109/ICCV.2017.221
BibTeX
@inproceedings{zhao2017iccv-open,
title = {{Open Vocabulary Scene Parsing}},
author = {Zhao, Hang and Puig, Xavier and Zhou, Bolei and Fidler, Sanja and Torralba, Antonio},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.221},
url = {https://mlanthology.org/iccv/2017/zhao2017iccv-open/}
}