Discovering the Spatial Extent of Relative Attributes

Abstract

We present a weakly-supervised approach that discovers the spatial extent of relative attributes, given only pairs of ordered images. In contrast to traditional approaches that use global appearance features or rely on keypoint detectors, our goal is to automatically discover the image regions that are relevant to the attribute, even when the attribute's appearance changes drastically across its attribute spectrum. To accomplish this, we first develop a novel formulation that combines a detector with local smoothness to discover a set of coherent visual chains across the image collection. We then introduce an efficient way to generate additional chains anchored on the initial discovered ones. Finally, we automatically identify the most relevant visual chains, and create an ensemble image representation to model the attribute. Through extensive experiments, we demonstrate our method's promise relative to several baselines in modeling relative attributes.
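The supervision signal described above — pairs of images ordered by attribute strength — is commonly modeled with a pairwise ranking function. The sketch below is purely illustrative and is not the paper's method (it omits the visual chains and spatial discovery entirely): it fits a linear ranker `w` from ordered pairs using a logistic pairwise loss, with random vectors standing in for image features. All names (`train_ranker`, the synthetic data) are hypothetical.

```python
import numpy as np

def train_ranker(X, pairs, lr=0.1, epochs=200, reg=1e-3):
    """Fit w so that X[i] @ w > X[j] @ w for every ordered pair (i, j).

    Each pair (i, j) asserts that image i exhibits the attribute more
    strongly than image j. Loss per pair: -log sigmoid(w . (x_i - x_j)).
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = reg * w
        for i, j in pairs:
            d = X[i] - X[j]                      # difference vector for the pair
            p = 1.0 / (1.0 + np.exp(-(d @ w)))   # P(pair ordered correctly)
            grad += (p - 1.0) * d                # gradient of -log p w.r.t. w
        w -= lr * grad / len(pairs)
    return w

if __name__ == "__main__":
    # Synthetic stand-in for image features and a ground-truth attribute.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 5))
    true_w = rng.normal(size=5)
    scores = X @ true_w
    # Ordered pairs (i, j): image i has the higher true attribute score.
    pairs = [(i, j) if scores[i] > scores[j] else (j, i)
             for i in range(10) for j in range(10, 20)]
    w = train_ranker(X, pairs)
    learned = X @ w
    # Fraction of training pairs the learned ranker orders correctly.
    print(np.mean([learned[i] > learned[j] for i, j in pairs]))
```

The paper's contribution is orthogonal to this objective: rather than ranking global features, it discovers *which image regions* the ranker should attend to, via coherent visual chains across the collection.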

Cite

Text

Xiao and Lee. "Discovering the Spatial Extent of Relative Attributes." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.171

Markdown

[Xiao and Lee. "Discovering the Spatial Extent of Relative Attributes." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/xiao2015iccv-discovering/) doi:10.1109/ICCV.2015.171

BibTeX

@inproceedings{xiao2015iccv-discovering,
  title     = {{Discovering the Spatial Extent of Relative Attributes}},
  author    = {Xiao, Fanyi and Lee, Yong Jae},
  booktitle = {International Conference on Computer Vision},
  year      = {2015},
  doi       = {10.1109/ICCV.2015.171},
  url       = {https://mlanthology.org/iccv/2015/xiao2015iccv-discovering/}
}