Enhancing Robustness to Class-Conditional Distribution Shift in Long-Tailed Recognition

Abstract

For the long-tailed recognition problem, beyond the imbalanced label distribution, unreliable empirical data distributions caused by instance scarcity have recently emerged as a concern. This scarcity inevitably causes Class-Conditional Distribution (CCD) shift between training and testing. Data augmentation and head-to-tail information transfer methods indirectly alleviate the problem by synthesizing novel examples, but may remain biased. In this paper, we conduct a thorough study of the impact of CCD shift and propose Distributionally Robust Augmentation (DRA) to directly train models robust to the shift. DRA admits a novel generalization bound reflecting the benefit of distributional robustness to CCD shift for long-tailed recognition. Extensive experiments show that DRA greatly improves existing re-balancing and data augmentation methods when combined with them. It also alleviates the recently discovered saddle-point issue, verifying its ability to achieve enhanced robustness.
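The abstract does not detail DRA's training procedure, but the core idea of distributional robustness it invokes can be illustrated with a generic Wasserstein-style inner maximization: perturb each training example toward higher loss while paying a transport-cost penalty for moving away from the original point. The sketch below is purely illustrative, not the paper's algorithm; `robust_perturb` and all hyperparameters (`gamma`, `steps`, `lr`) are assumptions.

```python
import numpy as np

def robust_perturb(x, y, w, gamma=0.1, steps=5, lr=0.5):
    """Generic distributionally robust inner step: move input x toward
    higher logistic loss, penalized by squared distance from the original
    point (a Wasserstein-style surrogate; NOT the paper's exact DRA)."""
    x0 = x.astype(float).copy()
    x = x0.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-y * (x @ w)))   # P(correct label)
        grad_loss_x = -(1.0 - p) * y * w          # d(logistic loss)/dx
        # ascend the loss while staying near x0 (transport-cost penalty)
        x = x + lr * (grad_loss_x - 2.0 * gamma * (x - x0))
    return x
```

Training the model on such perturbed examples instead of the originals is the usual outer step of distributionally robust optimization; robustness-aware augmentation schemes build on this worst-case view of the empirical class-conditional distribution.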

Cite

Text

Li et al. "Enhancing Robustness to Class-Conditional Distribution Shift in Long-Tailed Recognition." Transactions on Machine Learning Research, 2024.

Markdown

[Li et al. "Enhancing Robustness to Class-Conditional Distribution Shift in Long-Tailed Recognition." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/li2024tmlr-enhancing/)

BibTeX

@article{li2024tmlr-enhancing,
  title     = {{Enhancing Robustness to Class-Conditional Distribution Shift in Long-Tailed Recognition}},
  author    = {Li, Keliang and Chang, Hong and Shan, Shiguang and Chen, Xilin},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/li2024tmlr-enhancing/}
}