Generalization in Metric Learning: Should the Embedding Layer Be Embedding Layer?

Abstract

This work studies deep metric learning at small to medium scale, as we believe that better generalization may be a contributing factor to the improvements reported by previous fine-grained image retrieval methods and should be considered when designing future techniques. In particular, we investigate using layers of a deep metric learning system other than the embedding layer for feature extraction, and analyze how well they fit the training data and generalize to testing data. Based on this study, we suggest a new regularization practice in which one adds, or chooses, a more optimal layer for feature extraction. State-of-the-art performance is demonstrated on 3 fine-grained image retrieval benchmarks: Cars-196, CUB-200-2011, and Stanford Online Products.
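The core idea above can be illustrated with a minimal NumPy sketch: extract retrieval features either from the embedding layer or from the layer before it, L2-normalize both, and rank a gallery by cosine similarity. The two-layer toy network, its random weights, and all dimensions here are illustrative assumptions, not the paper's actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: a "backbone" layer followed by a
# low-dimensional "embedding" projection. Sizes are illustrative only.
W_backbone = rng.standard_normal((512, 1024))  # 512-d input -> 1024-d features
W_embed = rng.standard_normal((1024, 128))     # 1024-d -> 128-d embedding

def backbone_features(x):
    # Alternative choice studied here: features from the layer
    # *before* the embedding layer.
    return np.maximum(x @ W_backbone, 0.0)     # ReLU activation

def embedding_features(x):
    # Conventional choice: the embedding layer's output.
    return backbone_features(x) @ W_embed

def l2_normalize(f):
    # Unit-normalize each feature vector so dot product = cosine similarity.
    return f / np.linalg.norm(f, axis=1, keepdims=True)

# Rank a small random gallery against a query using either feature choice.
gallery = rng.standard_normal((10, 512))
query = rng.standard_normal((1, 512))

for extract in (backbone_features, embedding_features):
    q = l2_normalize(extract(query))
    g = l2_normalize(extract(gallery))
    sims = (q @ g.T).ravel()                   # cosine similarities, shape (10,)
    print(extract.__name__, "top match:", int(np.argmax(sims)))
```

In a real system the feature extractor would be a trained CNN, and the paper's question is which of these layers generalizes better from training classes to unseen test classes; the retrieval mechanics (normalize, then rank by cosine similarity) are the same either way.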

Cite

Text

Vo and Hays. "Generalization in Metric Learning: Should the Embedding Layer Be Embedding Layer?" IEEE/CVF Winter Conference on Applications of Computer Vision, 2019. doi:10.1109/WACV.2019.00068

Markdown

[Vo and Hays. "Generalization in Metric Learning: Should the Embedding Layer Be Embedding Layer?" IEEE/CVF Winter Conference on Applications of Computer Vision, 2019.](https://mlanthology.org/wacv/2019/vo2019wacv-generalization/) doi:10.1109/WACV.2019.00068

BibTeX

@inproceedings{vo2019wacv-generalization,
  title     = {{Generalization in Metric Learning: Should the Embedding Layer Be Embedding Layer?}},
  author    = {Vo, Nam and Hays, James},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2019},
  pages     = {589--598},
  doi       = {10.1109/WACV.2019.00068},
  url       = {https://mlanthology.org/wacv/2019/vo2019wacv-generalization/}
}