BIER - Boosting Independent Embeddings Robustly

Abstract

Learning similarity functions between image pairs with deep neural networks yields highly correlated activations of large embeddings. In this work, we show how to improve the robustness of embeddings by exploiting independence in ensembles. We divide the last embedding layer of a deep network into an embedding ensemble and formulate the training of this ensemble as an online gradient boosting problem. Each learner receives reweighted training samples from the previous learners. This leverages large embedding sizes more effectively by significantly reducing correlation within the embedding, which in turn increases its retrieval accuracy. Our method does not introduce any additional parameters and works with any differentiable loss function. We evaluate our metric learning method on image retrieval tasks and show that it improves over state-of-the-art methods on the CUB-200-2011, Cars-196, Stanford Online Products, In-Shop Clothes Retrieval and VehicleID datasets by a significant margin.
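
To make the boosting scheme concrete, below is a minimal PyTorch sketch of the training signal, assuming a 512-dimensional embedding split into three learners and a simple contrastive-style pair loss on cosine similarity. The group sizes (96/160/256), the margin, and the helper names (split_embedding, pair_loss, bier_loss) are illustrative assumptions for a runnable example, not the authors' implementation.

import torch
import torch.nn.functional as F

# Illustrative sketch of the BIER training signal (not the authors' code).
# A D-dimensional embedding is split into K learners; following the
# online gradient boosting idea, learner k's loss on each pair is
# reweighted by how poorly the ensemble of learners 1..k-1 already
# handles that pair.
GROUPS = (96, 160, 256)  # assumed split of a 512-d embedding

def split_embedding(x, groups=GROUPS):
    """Split the embedding and L2-normalize each learner's slice."""
    return [F.normalize(g, dim=1) for g in torch.split(x, list(groups), dim=1)]

def pair_loss(a, b, y, margin=0.5):
    """Contrastive-style pair loss on cosine similarity.

    y = 1 for matching pairs, 0 for non-matching pairs.
    """
    s = (a * b).sum(dim=1)  # cosine similarity of normalized embeddings
    return y * (1.0 - s) + (1.0 - y) * F.relu(s - margin)

def bier_loss(emb_a, emb_b, y):
    """Boosted ensemble loss: learner k sees samples reweighted by the
    (detached) residual loss of the ensemble of the previous learners."""
    xs_a, xs_b = split_embedding(emb_a), split_embedding(emb_b)
    total = 0.0
    weights = torch.ones(y.shape[0], device=emb_a.device)
    ens_a = ens_b = None
    for a, b in zip(xs_a, xs_b):
        total = total + (weights * pair_loss(a, b, y)).mean()
        # Grow the running ensemble by concatenating learner slices and
        # derive the next learner's sample weights from its residual loss.
        ens_a = a if ens_a is None else torch.cat([ens_a, a], dim=1)
        ens_b = b if ens_b is None else torch.cat([ens_b, b], dim=1)
        with torch.no_grad():  # no gradient flows through the reweighting
            weights = pair_loss(F.normalize(ens_a, dim=1),
                                F.normalize(ens_b, dim=1), y)
    return total

# Usage: embeddings from a shared trunk, pairs labeled 1 (match) / 0 (non-match).
emb_a = torch.randn(8, 512, requires_grad=True)
emb_b = torch.randn(8, 512, requires_grad=True)
y = torch.randint(0, 2, (8,)).float()
bier_loss(emb_a, emb_b, y).backward()

Detaching the reweighting step (torch.no_grad) treats the previous learners' residual loss purely as a sample-importance signal, as in classical boosting, rather than as an extra gradient path; at test time the learner outputs are simply concatenated, so no parameters are added.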

Cite

Text

Opitz et al. "BIER - Boosting Independent Embeddings Robustly." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.555

Markdown

[Opitz et al. "BIER - Boosting Independent Embeddings Robustly." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/opitz2017iccv-bier/) doi:10.1109/ICCV.2017.555

BibTeX

@inproceedings{opitz2017iccv-bier,
  title     = {{BIER - Boosting Independent Embeddings Robustly}},
  author    = {Opitz, Michael and Waltner, Georg and Possegger, Horst and Bischof, Horst},
  booktitle = {International Conference on Computer Vision},
  year      = {2017},
  doi       = {10.1109/ICCV.2017.555},
  url       = {https://mlanthology.org/iccv/2017/opitz2017iccv-bier/}
}