A Joint Learning Framework for Attribute Models and Object Descriptions
Abstract
We present a new approach to learning attribute-based descriptions of objects. Unlike earlier work, we do not assume that the descriptions are hand-labeled. Instead, our approach jointly learns both the attribute classifiers and the descriptions from data. By incorporating class information into the attribute classifier learning, we obtain an attribute-level representation that generalizes well both to unseen examples of known classes and to unseen classes. We consider two settings, one with unlabeled images available for learning and one without. The former corresponds to a novel transductive setting in which the unlabeled images can come from new classes. Results on the Animals with Attributes, a-Yahoo, and a-Pascal benchmark datasets show that the learned representations give similar or even better accuracy than the hand-labeled descriptions.
Cite
Text
Mahajan et al. "A Joint Learning Framework for Attribute Models and Object Descriptions." IEEE/CVF International Conference on Computer Vision, 2011. doi:10.1109/ICCV.2011.6126373
Markdown
[Mahajan et al. "A Joint Learning Framework for Attribute Models and Object Descriptions." IEEE/CVF International Conference on Computer Vision, 2011.](https://mlanthology.org/iccv/2011/mahajan2011iccv-joint/) doi:10.1109/ICCV.2011.6126373
BibTeX
@inproceedings{mahajan2011iccv-joint,
title = {{A Joint Learning Framework for Attribute Models and Object Descriptions}},
author = {Mahajan, Dhruv Kumar and Sellamanickam, Sundararajan and Nair, Vinod},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2011},
pages = {1227--1234},
doi = {10.1109/ICCV.2011.6126373},
url = {https://mlanthology.org/iccv/2011/mahajan2011iccv-joint/}
}