Boosting Standard Classification Architectures Through a Ranking Regularizer

Abstract

We employ triplet loss as a feature-embedding regularizer to boost classification performance. Standard architectures, like ResNet and Inception, are extended to support both losses with minimal hyper-parameter tuning. This promotes generality while fine-tuning pretrained networks. Triplet loss is a powerful surrogate for recently proposed embedding regularizers, yet it is often avoided due to its large batch-size requirement and high computational cost. Through our experiments, we re-assess these assumptions. During inference, our network supports both classification and embedding tasks without any computational overhead. Quantitative evaluation highlights a steady improvement on five fine-grained recognition datasets. Further evaluation on an imbalanced video dataset achieves a significant improvement. Triplet loss brings feature-embedding capabilities, such as nearest-neighbor retrieval, to classification models. Code is available at http://bit.ly/2LNYEqL
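The idea of pairing a triplet ranking loss with a standard classification loss can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a PyTorch setup, a hypothetical `JointModel` wrapper around a ResNet-18 backbone, and an assumed regularizer weight `lam`. It shows a shared embedding feeding both a cross-entropy head and a triplet margin loss.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical wrapper: a standard backbone produces an embedding that
# feeds both a classification head and a triplet ranking regularizer.
class JointModel(nn.Module):
    def __init__(self, num_classes, embed_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()      # expose the 512-d pooled feature
        self.backbone = backbone
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        emb = self.backbone(x)           # shared feature embedding
        logits = self.classifier(emb)    # classification head
        return logits, emb

model = JointModel(num_classes=100)
ce_loss = nn.CrossEntropyLoss()
triplet_loss = nn.TripletMarginLoss(margin=0.2)  # margin is an assumption
lam = 1.0                                        # assumed loss weight

# One illustrative training step on (anchor, positive, negative) triplets.
def train_step(anchor, positive, negative, labels, optimizer):
    logits, a = model(anchor)
    _, p = model(positive)
    _, n = model(negative)
    loss = ce_loss(logits, labels) + lam * triplet_loss(a, p, n)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference, only the forward pass is needed: the logits serve the classification task while the same embedding supports retrieval, which matches the paper's claim of no extra inference overhead.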

Cite

Text

Taha et al. "Boosting Standard Classification Architectures Through a Ranking Regularizer." Winter Conference on Applications of Computer Vision, 2020.

Markdown

[Taha et al. "Boosting Standard Classification Architectures Through a Ranking Regularizer." Winter Conference on Applications of Computer Vision, 2020.](https://mlanthology.org/wacv/2020/taha2020wacv-boosting/)

BibTeX

@inproceedings{taha2020wacv-boosting,
  title     = {{Boosting Standard Classification Architectures Through a Ranking Regularizer}},
  author    = {Taha, Ahmed and Chen, Yi-Ting and Misu, Teruhisa and Shrivastava, Abhinav and Davis, Larry},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2020},
  url       = {https://mlanthology.org/wacv/2020/taha2020wacv-boosting/}
}