SomethingFinder: Localizing Undefined Regions Using Referring Expressions

Abstract

Previous research on localizing a target region in an image referred to by a natural language expression has been carried out within an object-centric paradigm. In practice, however, there may not be any easily named or identifiable objects near a target location. Instead, references may need to rely on basic visual attributes, such as color, or on geometric cues. An expression like "a red something beside a blue vertical line" could still pinpoint a target location. As such, we begin to explore the open challenge of computational object-agnostic reference by constructing a novel dataset and by devising a new set of algorithms that identify a target region in an image given a referring expression containing only basic conceptual features.
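To make the task concrete, the following is a toy sketch (not the paper's dataset or algorithms) of attribute-based, object-agnostic localization: candidate regions are scored against a single color attribute mentioned in the expression. The color heuristic, function names, and window parameters are assumptions for illustration only.

```python
# Toy illustration: score candidate regions against the attribute "red"
# and return the region that best matches. Hypothetical names throughout.
import numpy as np

def redness_score(image):
    """Per-pixel score for 'red': high R channel, low G/B (image is HxWx3, floats in [0, 1])."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return np.clip(r - 0.5 * (g + b), 0.0, 1.0)

def best_region(image, window=32, stride=16):
    """Slide a square window over the image and return the box with the highest mean 'redness'."""
    score = redness_score(image)
    h, w = score.shape
    best, best_box = -1.0, None
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            s = score[y:y + window, x:x + window].mean()
            if s > best:
                best, best_box = s, (x, y, window, window)
    return best_box, best

if __name__ == "__main__":
    # Synthetic example: a mostly gray image with a red patch the expression could refer to.
    img = np.full((128, 128, 3), 0.5)
    img[40:72, 80:112] = [0.9, 0.1, 0.1]
    print(best_region(img))  # prints a box overlapping the red patch
```

A full system would also have to ground spatial relations ("beside a blue vertical line") and combine multiple attribute cues, which is what makes the object-agnostic setting an open challenge.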

Cite

Text

Eum et al. "SomethingFinder: Localizing Undefined Regions Using Referring Expressions." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00198

Markdown

[Eum et al. "SomethingFinder: Localizing Undefined Regions Using Referring Expressions." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/eum2020cvprw-somethingfinder/) doi:10.1109/CVPRW50498.2020.00198

BibTeX

@inproceedings{eum2020cvprw-somethingfinder,
  title     = {{SomethingFinder: Localizing Undefined Regions Using Referring Expressions}},
  author    = {Eum, Sungmin and Han, David K. and Briggs, Gordon},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2020},
  pages     = {1551--1554},
  doi       = {10.1109/CVPRW50498.2020.00198},
  url       = {https://mlanthology.org/cvprw/2020/eum2020cvprw-somethingfinder/}
}