Leveraging Weakly Annotated Data for Fashion Image Retrieval and Label Prediction

Abstract

In this paper, we present a method to learn a visual representation adapted for e-commerce products. Based on weakly supervised learning, our model learns from noisy datasets crawled from e-commerce website catalogs and does not require any manual labeling. We show that our representation can be used for downstream classification tasks over clothing categories at different levels of granularity. We also demonstrate that the learnt representation is suitable for image retrieval. We achieve nearly state-of-the-art results on the DeepFashion In-Shop Clothes Retrieval and Categories Attributes Prediction [12] tasks, without using the provided training set.
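The retrieval use-case mentioned in the abstract boils down to nearest-neighbor search in the learned embedding space. The following is a minimal illustrative sketch, not the paper's implementation: it assumes images have already been encoded into fixed-size embedding vectors by some model (simulated here with random vectors) and ranks a gallery by cosine similarity to a query.

```python
import numpy as np

def retrieve(query_emb, gallery_embs, k=5):
    """Return indices of the k gallery embeddings closest to the query
    under cosine similarity (embeddings are L2-normalized first)."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to every gallery item
    return np.argsort(-sims)[:k]     # indices of the k most similar items

# Stand-in embeddings: in the paper's setting these would come from the
# weakly supervised model; here they are random vectors for illustration.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 128))
query = gallery[42] + 0.01 * rng.normal(size=128)  # near-duplicate of item 42
top = retrieve(query, gallery)       # item 42 should rank first
```

In practice the gallery embeddings would be precomputed once over the shop catalog, and approximate nearest-neighbor indexing would replace the brute-force dot product at scale.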

Cite

Text

Corbière et al. "Leveraging Weakly Annotated Data for Fashion Image Retrieval and Label Prediction." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.266

Markdown

[Corbière et al. "Leveraging Weakly Annotated Data for Fashion Image Retrieval and Label Prediction." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/corbiere2017iccvw-leveraging/) doi:10.1109/ICCVW.2017.266

BibTeX

@inproceedings{corbiere2017iccvw-leveraging,
  title     = {{Leveraging Weakly Annotated Data for Fashion Image Retrieval and Label Prediction}},
  author    = {Corbière, Charles and Ben-Younes, Hédi and Ramé, Alexandre and Ollion, Charles},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2017},
  pages     = {2268--2274},
  doi       = {10.1109/ICCVW.2017.266},
  url       = {https://mlanthology.org/iccvw/2017/corbiere2017iccvw-leveraging/}
}