Long-Tail Learning via Logit Adjustment
Abstract
Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels have only a few associated samples. This poses a challenge for generalisation on such labels, and also makes naive learning biased towards dominant labels. In this paper, we present a statistical framework that unifies and generalises several recent proposals to cope with these challenges. Our framework revisits the classic idea of logit adjustment based on the label frequencies, which encourages a large relative margin between logits of rare positive versus dominant negative labels. This yields two techniques for long-tail learning, where such adjustment is either applied post-hoc to a trained model, or enforced in the loss during training. These techniques are statistically grounded, and practically effective on four real-world datasets with long-tailed label distributions.
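The two techniques described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the class counts, the example logits, and the temperature `tau` are hypothetical, and the class priors are assumed to come from the training label frequencies.

```python
import numpy as np

# Hypothetical long-tailed label distribution over 3 classes:
# one dominant head class and two increasingly rare tail classes.
class_counts = np.array([900, 90, 10])
priors = class_counts / class_counts.sum()

def post_hoc_adjust(logits, priors, tau=1.0):
    """Post-hoc logit adjustment: subtract tau * log(prior) per class
    from a trained model's logits, boosting rare classes relative to
    dominant ones at prediction time."""
    return logits - tau * np.log(priors)

def logit_adjusted_loss(logits, label, priors, tau=1.0):
    """Training-time variant: logit-adjusted softmax cross-entropy.
    Adding tau * log(prior) to the logits before the softmax enforces
    a larger relative margin for rare positive labels."""
    adjusted = logits + tau * np.log(priors)
    # Numerically plain cross-entropy on the adjusted logits.
    return -adjusted[label] + np.log(np.exp(adjusted).sum())

# Example logits for one input: the head class narrowly wins before
# adjustment, while a tail class wins after adjustment.
logits = np.array([2.0, 1.9, 1.5])
print(np.argmax(logits))                          # -> 0 (head class)
print(np.argmax(post_hoc_adjust(logits, priors))) # -> 2 (tail class)
```

With `tau = 1.0`, the post-hoc rule is equivalent to classifying by the largest estimate of P(x | y) rather than P(y | x), which removes the bias toward dominant labels.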
Cite
Text
Menon et al. "Long-Tail Learning via Logit Adjustment." International Conference on Learning Representations, 2021.
Markdown
[Menon et al. "Long-Tail Learning via Logit Adjustment." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/menon2021iclr-longtail/)
BibTeX
@inproceedings{menon2021iclr-longtail,
  title     = {{Long-Tail Learning via Logit Adjustment}},
  author    = {Menon, Aditya Krishna and Jayasumana, Sadeep and Rawat, Ankit Singh and Jain, Himanshu and Veit, Andreas and Kumar, Sanjiv},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/menon2021iclr-longtail/}
}