Adapting Attributes by Selecting Features Similar Across Domains

Abstract

Attributes are semantic visual properties shared by objects. They have been shown to improve object recognition and to enhance content-based image search. While attributes are expected to generalize across multiple categories, e.g. a dalmatian and a whale can both have "smooth skin", we find that the appearance of a single attribute varies considerably across categories. Thus, an attribute model learned on one category may not be usable on another. We show how to adapt attribute models to new categories. We ensure that positive transfer can occur between a source domain of categories and a novel target domain by learning in a feature subspace, found via feature selection, in which the data distributions of the two domains are similar. We demonstrate that when data from the novel domain is limited, regularizing attribute models for that domain with models trained on an auxiliary domain (via Adaptive SVM) improves the accuracy of attribute prediction.
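The core idea of selecting feature dimensions whose distributions agree across the source and target domains can be illustrated with a small sketch. The similarity measure below (per-dimension histogram intersection) and the function name are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def select_similar_features(src, tgt, k, bins=10):
    """Rank feature dimensions by how similar their marginal
    distributions are in the source vs. target domain, and keep
    the k most similar ones.

    Similarity here is the intersection of per-dimension
    normalized histograms: 1.0 means identical histograms,
    0.0 means no overlap. This is a simple stand-in for the
    paper's selection criterion.
    """
    scores = np.empty(src.shape[1])
    for d in range(src.shape[1]):
        # Shared bin edges so the two histograms are comparable.
        lo = min(src[:, d].min(), tgt[:, d].min())
        hi = max(src[:, d].max(), tgt[:, d].max())
        edges = np.linspace(lo, hi, bins + 1)
        hs, _ = np.histogram(src[:, d], bins=edges)
        ht, _ = np.histogram(tgt[:, d], bins=edges)
        hs = hs / hs.sum()
        ht = ht / ht.sum()
        scores[d] = np.minimum(hs, ht).sum()
    # Indices of the k dimensions with the most similar distributions.
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (500, 3))
tgt = rng.normal(0.0, 1.0, (500, 3))
tgt[:, 2] += 5.0  # dimension 2 is shifted, so it differs across domains
selected = select_similar_features(src, tgt, k=2)
```

An attribute classifier would then be trained only on the `selected` dimensions, and (per the abstract) regularized toward a source-domain model via Adaptive SVM when target data is scarce.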

Cite

Text

Liu and Kovashka. "Adapting Attributes by Selecting Features Similar Across Domains." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016. doi:10.1109/WACV.2016.7477731

Markdown

[Liu and Kovashka. "Adapting Attributes by Selecting Features Similar Across Domains." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016.](https://mlanthology.org/wacv/2016/liu2016wacv-adapting/) doi:10.1109/WACV.2016.7477731

BibTeX

@inproceedings{liu2016wacv-adapting,
  title     = {{Adapting Attributes by Selecting Features Similar Across Domains}},
  author    = {Liu, Siqi and Kovashka, Adriana},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2016},
  pages     = {1--8},
  doi       = {10.1109/WACV.2016.7477731},
  url       = {https://mlanthology.org/wacv/2016/liu2016wacv-adapting/}
}