Learning Attribute and Class-Specific Representation Duet for Fine-Grained Fashion Analysis

Abstract

Fashion representation learning involves the analysis and understanding of various visual elements at different granularities and the interactions among them. Existing works often learn fine-grained fashion representations at the attribute level without considering their relationships and inter-dependencies across different classes. In this work, we propose to learn an attribute- and class-specific fashion representation duet to better model such attribute relationships and inter-dependencies by leveraging prior knowledge about the taxonomy of fashion attributes and classes. Through two sub-networks for attributes and classes, respectively, our proposed embedding network progressively learns and refines the visual representation of a fashion image to improve its robustness for fashion retrieval. A multi-granularity loss consisting of attribute-level and class-level losses is proposed to introduce an appropriate inductive bias for learning across different granularities of the fashion representation. Experimental results on three benchmark datasets demonstrate the effectiveness of our method, which outperforms state-of-the-art methods by a large margin.
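The abstract describes the multi-granularity objective only at a high level. The sketch below is a minimal, hypothetical illustration of how an attribute-level term and a class-level term might be combined; the names (MultiGranularityLoss, lambda_cls, the use of cross-entropy terms) are assumptions for illustration, not the authors' implementation.

# Hypothetical sketch (not the paper's code): combining attribute-level and
# class-level losses into a single multi-granularity objective.
import torch
import torch.nn as nn

class MultiGranularityLoss(nn.Module):
    def __init__(self, lambda_cls: float = 1.0):
        super().__init__()
        self.attr_loss = nn.CrossEntropyLoss()  # one term per attribute (e.g. sleeve length)
        self.cls_loss = nn.CrossEntropyLoss()   # coarser, class-level term (e.g. dress vs. shirt)
        self.lambda_cls = lambda_cls             # assumed weighting between the two granularities

    def forward(self, attr_logits, attr_labels, cls_logits, cls_labels):
        # attr_logits: list of [B, n_values_i] tensors, one per attribute type
        # attr_labels: list of [B] tensors with ground-truth value indices
        # cls_logits:  [B, n_classes] tensor of class predictions
        l_attr = sum(self.attr_loss(logits, labels)
                     for logits, labels in zip(attr_logits, attr_labels)) / len(attr_logits)
        l_cls = self.cls_loss(cls_logits, cls_labels)
        return l_attr + self.lambda_cls * l_cls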

Cite

Text

Jiao et al. "Learning Attribute and Class-Specific Representation Duet for Fine-Grained Fashion Analysis." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01063

Markdown

[Jiao et al. "Learning Attribute and Class-Specific Representation Duet for Fine-Grained Fashion Analysis." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/jiao2023cvpr-learning/) doi:10.1109/CVPR52729.2023.01063

BibTeX

@inproceedings{jiao2023cvpr-learning,
  title     = {{Learning Attribute and Class-Specific Representation Duet for Fine-Grained Fashion Analysis}},
  author    = {Jiao, Yang and Gao, Yan and Meng, Jingjing and Shang, Jin and Sun, Yi},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {11050--11059},
  doi       = {10.1109/CVPR52729.2023.01063},
  url       = {https://mlanthology.org/cvpr/2023/jiao2023cvpr-learning/}
}