A Weakly Supervised Adaptive Triplet Loss for Deep Metric Learning
Abstract
We address the problem of distance metric learning in visual similarity search, defined as learning an image embedding model which projects images into a Euclidean space where semantically and visually similar images are closer and dissimilar images are further from one another. We present a weakly supervised adaptive triplet loss (ATL) capable of capturing fine-grained semantic similarity that encourages the learned image embedding models to generalize well on cross-domain data. The method uses weakly labeled product description data to implicitly determine fine-grained semantic classes, avoiding the need to annotate large amounts of training data. We evaluate on the Amazon fashion retrieval benchmark and the DeepFashion in-shop retrieval data. The method boosts the performance of the triplet loss baseline by 10.6% on cross-domain data and outperforms the state-of-the-art model on all evaluation metrics.
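To make the objective concrete, the sketch below shows a standard triplet loss alongside a hypothetical adaptive variant. The standard loss pushes an anchor-negative pair apart by a fixed margin relative to the anchor-positive pair; the adaptive variant scales that margin by a semantic-similarity signal, which is the general idea behind ATL. The exact margin-adaptation rule and the similarity scores here (`sim_pos`, `sim_neg`) are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss in Euclidean space: the positive should be
    closer to the anchor than the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def adaptive_triplet_loss(anchor, positive, negative,
                          sim_pos, sim_neg, base_margin=0.2):
    """Hypothetical adaptive variant: scale the margin by how much more
    semantically similar the positive is than the negative. `sim_pos`
    and `sim_neg` are similarity scores in [0, 1] that could be derived
    from weakly labeled product descriptions."""
    margin = base_margin * (1.0 + sim_pos - sim_neg)
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

With a semantically very dissimilar negative the adaptive margin grows, so the loss demands a larger embedding gap than the fixed-margin loss would for the same triplet.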
Cite
Text
Zhao et al. "A Weakly Supervised Adaptive Triplet Loss for Deep Metric Learning." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00393
Markdown
[Zhao et al. "A Weakly Supervised Adaptive Triplet Loss for Deep Metric Learning." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/zhao2019iccvw-weakly/) doi:10.1109/ICCVW.2019.00393
BibTeX
@inproceedings{zhao2019iccvw-weakly,
title = {{A Weakly Supervised Adaptive Triplet Loss for Deep Metric Learning}},
author = {Zhao, Xiaonan and Qi, Huan and Luo, Rui and Davis, Larry},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2019},
pages = {3177-3180},
doi = {10.1109/ICCVW.2019.00393},
url = {https://mlanthology.org/iccvw/2019/zhao2019iccvw-weakly/}
}