Relative Attribute Learning with Deep Attentive Cross-Image Representation
Abstract
In this paper, we study the relative attribute learning problem, which refers to comparing the strengths of a specific attribute between image pairs, from a new perspective of cross-image representation learning. In particular, we introduce a deep attentive cross-image representation learning (DACRL) model, which first extracts single-image representations with one shared subnetwork, and then learns an attentive cross-image representation by applying channel-wise attention to the concatenated single-image feature maps. Taking a pair of images as input, DACRL outputs a posterior probability indicating whether the first image in the pair has a stronger presence of the attribute than the second image. The whole network is jointly optimized via a unified end-to-end deep learning scheme. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our approach against the state-of-the-art methods.
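The pipeline the abstract describes (shared feature extraction, channel-wise concatenation, channel attention, probabilistic pairwise output) can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the SE-style squeeze-and-excitation form of the channel attention, the global-average-pooled readout, and all function and weight names are assumptions for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Reweight channels of a (C, H, W) feature map.

    Assumed SE-style gating: global average pool per channel,
    a two-layer bottleneck, then sigmoid gates on each channel.
    """
    squeezed = features.mean(axis=(1, 2))          # (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)        # ReLU bottleneck
    gates = sigmoid(w2 @ hidden)                   # (C,) gates in (0, 1)
    return features * gates[:, None, None]

def pairwise_posterior(feat_a, feat_b, w1, w2, w_out):
    """Posterior probability that image A shows the attribute
    more strongly than image B, given their (C, H, W) feature
    maps from a shared subnetwork."""
    # Cross-image representation: concatenate along channels
    pair = np.concatenate([feat_a, feat_b], axis=0)   # (2C, H, W)
    attended = channel_attention(pair, w1, w2)
    pooled = attended.mean(axis=(1, 2))               # (2C,)
    return sigmoid(w_out @ pooled)

# Toy usage with random features and weights (hypothetical sizes)
rng = np.random.default_rng(0)
C, H, W, R = 8, 4, 4, 4                       # channels, spatial dims, bottleneck
feat_a = rng.standard_normal((C, H, W))
feat_b = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((R, 2 * C)) * 0.1
w2 = rng.standard_normal((2 * C, R)) * 0.1
w_out = rng.standard_normal(2 * C) * 0.1
p = pairwise_posterior(feat_a, feat_b, w1, w2, w_out)
```

In training, `p` would be matched against a binary label (1 if the first image has the stronger attribute) with a cross-entropy loss, so the shared subnetwork and attention weights are optimized end to end as the abstract states.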
Cite
Text
Zhang et al. "Relative Attribute Learning with Deep Attentive Cross-Image Representation." Proceedings of The 10th Asian Conference on Machine Learning, 2018.
Markdown
[Zhang et al. "Relative Attribute Learning with Deep Attentive Cross-Image Representation." Proceedings of The 10th Asian Conference on Machine Learning, 2018.](https://mlanthology.org/acml/2018/zhang2018acml-relative/)
BibTeX
@inproceedings{zhang2018acml-relative,
title = {{Relative Attribute Learning with Deep Attentive Cross-Image Representation}},
author = {Zhang, Zeshang and Li, Yingming and Zhang, Zhongfei},
booktitle = {Proceedings of The 10th Asian Conference on Machine Learning},
year = {2018},
pages = {879--892},
volume = {95},
url = {https://mlanthology.org/acml/2018/zhang2018acml-relative/}
}