Predicting Multiple Attributes via Relative Multi-Task Learning

Abstract

Relative attribute learning aims to learn ranking functions that describe the relative strength of attributes. Most current approaches learn a ranking function for each attribute independently, without considering possible intrinsic relatedness among the attributes. For a problem involving multiple attributes, it is reasonable to assume that exploiting such relatedness would benefit learning, especially when the number of labeled training pairs is very limited. In this paper, we propose a relative multi-attribute learning framework that integrates relative attributes into a multi-task learning scheme. The formulation allows us to exploit the advantages of state-of-the-art regularization-based multi-task learning for improved attribute learning. In particular, using joint feature learning as a case study, we evaluate our framework on both synthetic data and two real datasets. Experimental results suggest that the proposed framework yields clear gains in ranking accuracy and zero-shot learning accuracy over existing methods based on independent relative attribute learning and multi-task learning.
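To make the idea concrete, here is a minimal sketch of the two ingredients the abstract combines: a pairwise (RankSVM-style) hinge loss that learns one linear ranking function per attribute, and an l2,1 penalty on the shared weight matrix that couples the attribute tasks so they select a common subset of features. This is an illustrative stand-in for the paper's regularization-based joint feature learning, not its exact formulation or solver; all function names and hyperparameters below are hypothetical.

```python
import numpy as np

def train_relative_multitask(X, pairs_per_task, lam=0.05, lr=0.01, epochs=300, seed=0):
    """Jointly fit one linear ranking function per attribute (task).

    pairs_per_task[t] lists (i, j) pairs meaning sample i shows attribute t
    more strongly than sample j. The l2,1 penalty on W (features x attributes)
    couples the tasks so they prefer a shared subset of features -- a simple
    stand-in for regularization-based multi-task joint feature learning.
    """
    rng = np.random.default_rng(seed)
    d, T = X.shape[1], len(pairs_per_task)
    W = rng.normal(scale=0.01, size=(d, T))
    for _ in range(epochs):
        grad = np.zeros_like(W)
        for t, pairs in enumerate(pairs_per_task):
            for i, j in pairs:
                diff = X[i] - X[j]
                if diff @ W[:, t] < 1.0:   # RankSVM-style hinge on the pair
                    grad[:, t] -= diff
        # Subgradient of the l2,1 norm: whole feature rows shrink together.
        row_norm = np.linalg.norm(W, axis=1, keepdims=True)
        grad += lam * W / np.maximum(row_norm, 1e-8)
        W -= lr * grad
    return W

def pairwise_accuracy(X, W, pairs_per_task):
    """Fraction of pairs each learned ranker orders correctly."""
    accs = []
    for t, pairs in enumerate(pairs_per_task):
        ok = sum((X[i] - X[j]) @ W[:, t] > 0 for i, j in pairs)
        accs.append(ok / len(pairs))
    return accs

# Toy data: two attributes whose true scores share feature 0,
# so joint feature selection has something to exploit.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
scores = np.stack([X[:, 0] + X[:, 1], X[:, 0] - X[:, 1]], axis=1)
pairs_per_task = []
for t in range(2):
    idx = rng.integers(0, 100, size=(200, 2))
    pairs_per_task.append([(i, j) if scores[i, t] > scores[j, t] else (j, i)
                           for i, j in idx if scores[i, t] != scores[j, t]])

W = train_relative_multitask(X, pairs_per_task)
accs = pairwise_accuracy(X, W, pairs_per_task)
print([round(a, 2) for a in accs])
```

On this noise-free toy problem both rankers recover their orderings almost perfectly; the interesting regime in the paper is the one with few labeled pairs, where the shared regularizer lets the tasks borrow statistical strength from each other.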

Cite

Text

Chen et al. "Predicting Multiple Attributes via Relative Multi-Task Learning." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.135

Markdown

[Chen et al. "Predicting Multiple Attributes via Relative Multi-Task Learning." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/chen2014cvpr-predicting/) doi:10.1109/CVPR.2014.135

BibTeX

@inproceedings{chen2014cvpr-predicting,
  title     = {{Predicting Multiple Attributes via Relative Multi-Task Learning}},
  author    = {Chen, Lin and Zhang, Qiang and Li, Baoxin},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
  doi       = {10.1109/CVPR.2014.135},
  url       = {https://mlanthology.org/cvpr/2014/chen2014cvpr-predicting/}
}