Margin Calibration for Long-Tailed Visual Recognition

Abstract

Long-tailed visual recognition poses a great challenge for neural networks: predictions are imbalanced between head (common) and tail (rare) classes, i.e., models tend to classify tail-class samples as head classes. While existing research has focused on data resampling and loss function engineering, in this paper we take a different perspective: the classification margins. We study the relationship between the margins and logits and empirically observe that the uncalibrated margins and logits are positively correlated. We propose a simple yet effective MARgin Calibration approach (MARC) that calibrates the margins to obtain better logits. We validate MARC through extensive experiments on common long-tailed benchmarks, including CIFAR-LT, ImageNet-LT, Places-LT, and iNaturalist-LT. Experimental results demonstrate that MARC achieves favorable results on these benchmarks. In addition, MARC is extremely easy to implement, requiring only three lines of code. We hope this simple approach will motivate people to rethink the uncalibrated margins and logits in long-tailed visual recognition.
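
As a rough illustration of the idea described above, the sketch below assumes that margin calibration is applied post hoc as a learnable per-class scale and shift on the logits of a frozen model; the class name `MarginCalibration`, the parameter names `omega` and `beta`, and the two-stage training setup are assumptions made for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class MarginCalibration(nn.Module):
    """Hypothetical sketch of a MARC-style calibration layer.

    Assumes the backbone and linear classifier are trained first and
    then frozen, after which only a per-class scale (omega) and shift
    (beta) on the logits are learned in a short second stage.
    """

    def __init__(self, num_classes: int):
        super().__init__()
        # One scale and one shift per class; initialized to identity.
        self.omega = nn.Parameter(torch.ones(num_classes))
        self.beta = nn.Parameter(torch.zeros(num_classes))

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        # Re-scale and shift each class's logit to calibrate margins.
        return self.omega * logits + self.beta


# Example usage (assumed workflow): wrap a frozen model's logits.
# calibrator = MarginCalibration(num_classes=100)
# calibrated_logits = calibrator(frozen_model(images))
```

Under these assumptions, only the two small per-class parameter vectors are optimized during calibration, which is consistent with the abstract's claim that the method is lightweight and easy to add to an existing pipeline.
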

Cite

Text

Wang et al. "Margin Calibration for Long-Tailed Visual Recognition." Proceedings of The 14th Asian Conference on Machine Learning, 2022.

Markdown

[Wang et al. "Margin Calibration for Long-Tailed Visual Recognition." Proceedings of The 14th Asian Conference on Machine Learning, 2022.](https://mlanthology.org/acml/2022/wang2022acml-margin/)

BibTeX

@inproceedings{wang2022acml-margin,
  title     = {{Margin Calibration for Long-Tailed Visual Recognition}},
  author    = {Wang, Yidong and Zhang, Bowen and Hou, Wenxin and Wu, Zhen and Wang, Jindong and Shinozaki, Takahiro},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  year      = {2022},
  pages     = {1101--1116},
  volume    = {189},
  url       = {https://mlanthology.org/acml/2022/wang2022acml-margin/}
}