Pose Guided Attention for Multi-Label Fashion Image Classification
Abstract
We propose a compact framework with guided attention for multi-label classification in the fashion domain. Our visual semantic attention model (VSAM) is supervised by automatic pose extraction, creating a discriminative feature space. VSAM outperforms the state of the art on an in-house dataset and performs on par with previous works on the DeepFashion dataset, even without using any landmark annotations. Additionally, we show that our semantic attention module is robust to large amounts of incorrect annotations and provides more interpretable results.
Cite
Text
Ferreira et al. "Pose Guided Attention for Multi-Label Fashion Image Classification." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00380
Markdown
[Ferreira et al. "Pose Guided Attention for Multi-Label Fashion Image Classification." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/ferreira2019iccvw-pose/) doi:10.1109/ICCVW.2019.00380
BibTeX
@inproceedings{ferreira2019iccvw-pose,
title = {{Pose Guided Attention for Multi-Label Fashion Image Classification}},
author = {Ferreira, Beatriz Quintino and Costeira, João Paulo and Sousa, Ricardo Gamelas and Gui, Liang-Yan and Gomes, João Pedro},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2019},
  pages = {3125--3128},
doi = {10.1109/ICCVW.2019.00380},
url = {https://mlanthology.org/iccvw/2019/ferreira2019iccvw-pose/}
}