Actively Selecting Annotations Among Objects and Attributes

Abstract

We present an active learning approach to choose image annotation requests among both object category labels and the objects' attribute labels. The goal is to solicit those labels that will best use human effort when training a multi-class object recognition model. In contrast to previous work in active visual category learning, our approach directly exploits the dependencies between human-nameable visual attributes and the objects they describe, shifting its requests in either label space accordingly. We adopt a discriminative latent model that captures object-attribute and attribute-attribute relationships, and then define a suitable entropy reduction selection criterion to predict the influence a new label might have throughout those connections. On three challenging datasets, we demonstrate that the method can more successfully accelerate object learning relative to both passive learning and traditional active learning approaches.
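The paper's selection criterion predicts how much a requested label (object or attribute) would reduce entropy throughout the model's object-attribute connections. As a much-simplified illustration of the underlying idea, the sketch below shows plain entropy-based uncertainty sampling over a pool of candidates; the function and data names are hypothetical, and the actual criterion in the paper reasons over a latent object-attribute model rather than independent posteriors.

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_uncertain(candidates):
    """Pick the candidate whose predicted label distribution has the
    highest entropy, i.e. the request a label would disambiguate most.
    `candidates` maps an example id to its class posterior (hypothetical)."""
    return max(candidates, key=lambda c: entropy(candidates[c]))

# Toy unlabeled pool: 'b' has a near-uniform posterior, so it is
# the most informative label request under this simple criterion.
pool = {
    "a": [0.90, 0.05, 0.05],
    "b": [0.34, 0.33, 0.33],
    "c": [0.70, 0.20, 0.10],
}
print(select_most_uncertain(pool))  # -> "b"
```

In the paper's full formulation, the expected entropy reduction of an object label also accounts for its influence on attribute predictions (and vice versa), which this independent-posterior sketch deliberately omits.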

Cite

Text

Kovashka et al. "Actively Selecting Annotations Among Objects and Attributes." IEEE International Conference on Computer Vision, 2011. doi:10.1109/ICCV.2011.6126395

Markdown

[Kovashka et al. "Actively Selecting Annotations Among Objects and Attributes." IEEE International Conference on Computer Vision, 2011.](https://mlanthology.org/iccv/2011/kovashka2011iccv-actively/) doi:10.1109/ICCV.2011.6126395

BibTeX

@inproceedings{kovashka2011iccv-actively,
  title     = {{Actively Selecting Annotations Among Objects and Attributes}},
  author    = {Kovashka, Adriana and Vijayanarasimhan, Sudheendra and Grauman, Kristen},
  booktitle = {IEEE International Conference on Computer Vision},
  year      = {2011},
  pages     = {1403--1410},
  doi       = {10.1109/ICCV.2011.6126395},
  url       = {https://mlanthology.org/iccv/2011/kovashka2011iccv-actively/}
}