Multi-Modal Embedding for Main Product Detection in Fashion

Abstract

We present an approach to detect the main product in fashion images by exploiting the textual metadata associated with each image. Our approach is based on a Convolutional Neural Network and learns a joint embedding of object proposals and textual metadata to predict the main product in the image. We additionally use several complementary classification and overlap losses to improve training stability and performance. Our tests on a large-scale dataset taken from eight e-commerce sites show that our approach outperforms strong baselines and accurately detects the main product in a wide variety of challenging fashion images.
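
To make the idea concrete, below is a minimal sketch of the kind of two-branch embedding the abstract describes: pooled CNN features for each object proposal and a text feature for the product metadata are projected into a shared space, a similarity-based loss (plus an auxiliary classification term standing in for the complementary losses mentioned above) trains the projections, and at test time the proposal most similar to the text is kept as the main product. All module names, dimensions, and the specific loss forms are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProposalTextEmbedding(nn.Module):
    """Projects per-proposal image features and a text feature into a shared space."""
    def __init__(self, img_dim=2048, txt_dim=300, emb_dim=128, num_classes=10):
        super().__init__()
        self.img_proj = nn.Sequential(nn.Linear(img_dim, emb_dim), nn.ReLU(),
                                      nn.Linear(emb_dim, emb_dim))
        self.txt_proj = nn.Sequential(nn.Linear(txt_dim, emb_dim), nn.ReLU(),
                                      nn.Linear(emb_dim, emb_dim))
        # Auxiliary classifier over proposal embeddings (e.g. garment category),
        # a stand-in for the complementary classification losses mentioned above.
        self.classifier = nn.Linear(emb_dim, num_classes)

    def forward(self, proposal_feats, text_feats):
        # proposal_feats: (num_proposals, img_dim) pooled CNN features, one row per box
        # text_feats:     (txt_dim,) embedding of the product's textual metadata
        img_emb = F.normalize(self.img_proj(proposal_feats), dim=-1)
        txt_emb = F.normalize(self.txt_proj(text_feats), dim=-1)
        return img_emb, txt_emb, self.classifier(img_emb)

def training_step(model, proposal_feats, text_feats, is_main, labels, margin=0.2):
    # One illustrative step: pull main-product proposals toward the text embedding,
    # push the remaining proposals away, and add an auxiliary classification term.
    # Assumes each image contributes at least one positive and one negative proposal.
    img_emb, txt_emb, logits = model(proposal_feats, text_feats)
    sim = img_emb @ txt_emb                                      # cosine similarity per proposal
    pos = (1.0 - sim[is_main]).clamp(min=0).mean()               # positives should score near 1
    neg = (sim[~is_main] - (1.0 - margin)).clamp(min=0).mean()   # negatives kept below the margin
    cls = F.cross_entropy(logits, labels)                        # per-proposal category loss
    return pos + neg + cls

@torch.no_grad()
def detect_main_product(model, proposal_feats, text_feats):
    # At test time, the proposal most similar to the metadata is kept as the main product.
    img_emb, txt_emb, _ = model(proposal_feats, text_feats)
    return int(torch.argmax(img_emb @ txt_emb))

Placing both modalities in one normalized space lets the textual metadata act as a query over the proposals, so the same network can rank any number of candidate boxes per image.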

Cite

Text

Yu et al. "Multi-Modal Embedding for Main Product Detection in Fashion." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.261

Markdown

[Yu et al. "Multi-Modal Embedding for Main Product Detection in Fashion." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/yu2017iccvw-multimodal/) doi:10.1109/ICCVW.2017.261

BibTeX

@inproceedings{yu2017iccvw-multimodal,
  title     = {{Multi-Modal Embedding for Main Product Detection in Fashion}},
  author    = {Yu, LongLong and Simo-Serra, Edgar and Moreno-Noguer, Francesc and Rubio, Antonio},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2017},
  pages     = {2236--2242},
  doi       = {10.1109/ICCVW.2017.261},
  url       = {https://mlanthology.org/iccvw/2017/yu2017iccvw-multimodal/}
}