Decoupled Imbalanced Label Distribution Learning
Abstract
Label Distribution Learning (LDL) has been successfully applied in numerous practical applications. However, imbalanced label distributions present a significant challenge because the amount of annotation information varies substantially across labels. To tackle this issue, we introduce Decoupled Imbalanced Label Distribution Learning (DILDL), which decomposes an imbalanced label distribution into a dominant label distribution and a non-dominant label distribution. Our empirical findings reveal that an excessively high description degree of dominant labels can cause substantial attenuation of the gradient information for non-dominant labels during learning. We therefore use the decoupling approach to balance the description degrees of dominant and non-dominant labels independently. Furthermore, we align the feature representations with the representations of dominant and non-dominant labels separately, aiming to effectively mitigate the distribution shift problem. Experimental results demonstrate that our proposed DILDL outperforms other state-of-the-art methods for imbalanced label distribution learning.
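To make the decomposition idea concrete, the sketch below splits each ground-truth label distribution into a dominant part (its top-k labels) and a non-dominant part, renormalizes each, and trains them with separate loss terms so that a large dominant description degree cannot swamp the gradients for the non-dominant labels. This is a minimal illustration under assumptions: the PyTorch setup, the top-k split, the KL-divergence losses, and the weighting `alpha` are illustrative choices, not the paper's exact formulation, and the feature-alignment component described in the abstract is omitted.

```python
import torch
import torch.nn.functional as F


def decompose_distribution(d, top_k=1, eps=1e-12):
    """Split a label distribution d of shape (batch, num_labels) into a
    dominant part (top-k labels per sample) and a non-dominant part
    (remaining labels), each renormalized to sum to 1."""
    _, top_idx = d.topk(top_k, dim=1)
    dom_mask = torch.zeros_like(d).scatter_(1, top_idx, 1.0)
    dom = d * dom_mask
    non = d * (1.0 - dom_mask)
    dom = dom / (dom.sum(dim=1, keepdim=True) + eps)
    non = non / (non.sum(dim=1, keepdim=True) + eps)
    return dom, non, dom_mask


def decoupled_kl_loss(pred_logits, target, top_k=1, alpha=0.5, eps=1e-12):
    """Illustrative decoupled loss: the KL divergence is computed separately
    on the dominant and non-dominant supports, so the gradient signal for
    non-dominant labels is not attenuated by a dominant label with a very
    high description degree."""
    dom_t, non_t, dom_mask = decompose_distribution(target, top_k, eps)
    p = F.softmax(pred_logits, dim=1)
    # Restrict the prediction to each support and renormalize before the KL.
    p_dom = p * dom_mask
    p_dom = p_dom / (p_dom.sum(dim=1, keepdim=True) + eps)
    p_non = p * (1.0 - dom_mask)
    p_non = p_non / (p_non.sum(dim=1, keepdim=True) + eps)
    loss_dom = F.kl_div((p_dom + eps).log(), dom_t, reduction="batchmean")
    loss_non = F.kl_div((p_non + eps).log(), non_t, reduction="batchmean")
    return alpha * loss_dom + (1.0 - alpha) * loss_non
```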
Cite
Text
Gao et al. "Decoupled Imbalanced Label Distribution Learning." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/579
Markdown
[Gao et al. "Decoupled Imbalanced Label Distribution Learning." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/gao2025ijcai-decoupled/) doi:10.24963/IJCAI.2025/579
BibTeX
@inproceedings{gao2025ijcai-decoupled,
title = {{Decoupled Imbalanced Label Distribution Learning}},
author = {Gao, Yongbiao and Sun, Xiangcheng and Ling, Miaogen and Tan, Chao and Zhai, Yi and Lv, Guohua},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {5199-5207},
doi = {10.24963/IJCAI.2025/579},
url = {https://mlanthology.org/ijcai/2025/gao2025ijcai-decoupled/}
}