Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items
Abstract
Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval-based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms the state of the art in parsing accuracy.
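The abstract describes a two-stage idea: retrieve visually similar styles from a tagged database, then combine several per-pixel parsing signals. A minimal sketch of that combination step is below; the function names, L2-distance retrieval, and uniform weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def retrieve_similar(query_feat, db_feats, k=3):
    """Return indices of the k nearest database images by L2 distance.
    (Assumed retrieval criterion; the paper's style descriptor differs.)"""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]

def combine_parses(global_scores, local_scores, transfer_scores,
                   weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine three per-pixel label score maps (H x W x L) -- global model,
    retrieval-trained local model, and transferred parse masks -- by a
    weighted average, then return the argmax label map (H x W)."""
    w1, w2, w3 = weights
    combined = w1 * global_scores + w2 * local_scores + w3 * transfer_scores
    return combined.argmax(axis=-1)

# Toy example: 2 candidate labels over a 2x2 image.
rng = np.random.default_rng(0)
g = rng.random((2, 2, 2))   # stand-in for global-model scores
l = rng.random((2, 2, 2))   # stand-in for local-model scores
t = rng.random((2, 2, 2))   # stand-in for transferred-mask scores
labels = combine_parses(g, l, t)
print(labels.shape)  # (2, 2)
```

The point of the sketch is only the structure: retrieval supplies the examples, and the final label at each pixel is an argmax over fused evidence from the three sources.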
Cite
Text
Yamaguchi et al. "Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items." International Conference on Computer Vision, 2013. doi:10.1109/ICCV.2013.437
Markdown
[Yamaguchi et al. "Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items." International Conference on Computer Vision, 2013.](https://mlanthology.org/iccv/2013/yamaguchi2013iccv-paper/) doi:10.1109/ICCV.2013.437
BibTeX
@inproceedings{yamaguchi2013iccv-paper,
title = {{Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items}},
author = {Yamaguchi, Kota and Kiapour, M. Hadi and Berg, Tamara L.},
booktitle = {International Conference on Computer Vision},
year = {2013},
doi = {10.1109/ICCV.2013.437},
url = {https://mlanthology.org/iccv/2013/yamaguchi2013iccv-paper/}
}