Difficulty-Aware Balancing Margin Loss for Long-Tailed Recognition

Abstract

When trained with severely imbalanced data, deep neural networks often struggle to accurately recognize classes with few samples. Previous studies in long-tailed recognition have attempted to rebalance biased learning using known sample distributions, primarily addressing different classification difficulties at the class level. However, these approaches often overlook the variation in instance difficulty within each class. In this paper, we propose a difficulty-aware balancing margin (DBM) loss, which considers both class imbalance and instance difficulty. DBM loss comprises two components: a class-wise margin to mitigate learning bias caused by imbalanced class frequencies, and an instance-wise margin assigned to hard positive samples based on their individual difficulty. DBM loss improves class discriminativity by assigning larger margins to more difficult samples. Our method combines seamlessly with existing approaches and consistently improves performance across various long-tailed recognition benchmarks.
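The abstract describes a loss with two margin terms: a class-wise margin driven by class frequency and an instance-wise margin driven by per-sample difficulty. The paper's exact formulation is not reproduced on this page, so the following is only a minimal NumPy sketch of that idea under stated assumptions: the class-wise margin is taken to scale with inverse fourth-root class frequency (an LDAM-style choice, not necessarily the paper's), and instance difficulty is approximated by how low the sample's target cosine similarity is.

```python
import numpy as np

def dbm_style_loss(logits, labels, class_counts,
                   s=30.0, max_margin=0.5, lam=0.5):
    """Hypothetical sketch of a difficulty-aware balancing margin loss.

    logits: (N, C) cosine similarities in [-1, 1]
    labels: (N,) integer class ids
    class_counts: (C,) per-class training frequencies
    s, max_margin, lam: illustrative hyperparameters (assumptions)
    """
    n = logits.shape[0]
    idx = np.arange(n)

    # Class-wise margin: rarer classes receive larger margins
    # (assumed inverse fourth-root scaling, normalized to max_margin).
    cls_margin = class_counts.astype(float) ** -0.25
    cls_margin = cls_margin / cls_margin.max() * max_margin

    # Instance-wise margin: harder positives (lower target cosine)
    # receive an extra margin proportional to their difficulty.
    target = logits[idx, labels]
    inst_margin = lam * (1.0 - target) / 2.0  # difficulty in [0, 1]

    # Subtract the combined margin from the target logit, then apply
    # a scaled, numerically stable softmax cross-entropy.
    adjusted = logits.copy()
    adjusted[idx, labels] -= cls_margin[labels] + inst_margin
    z = s * adjusted
    z -= z.max(axis=1, keepdims=True)
    log_prob = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_prob[idx, labels].mean()
```

Because both margins shrink the target logit before the softmax, harder and rarer-class samples contribute more to the loss, which is the balancing behavior the abstract describes.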

Cite

Text

Son et al. "Difficulty-Aware Balancing Margin Loss for Long-Tailed Recognition." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I19.34261

Markdown

[Son et al. "Difficulty-Aware Balancing Margin Loss for Long-Tailed Recognition." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/son2025aaai-difficulty/) doi:10.1609/AAAI.V39I19.34261

BibTeX

@inproceedings{son2025aaai-difficulty,
  title     = {{Difficulty-Aware Balancing Margin Loss for Long-Tailed Recognition}},
  author    = {Son, Minseok and Koo, Inyong and Park, Jinyoung and Kim, Changick},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {20522--20530},
  doi       = {10.1609/AAAI.V39I19.34261},
  url       = {https://mlanthology.org/aaai/2025/son2025aaai-difficulty/}
}