Approximately Correct Label Distribution Learning

Abstract

Label distribution learning (LDL) is a powerful learning paradigm that models label polysemy by assigning each instance a distribution of description degrees over the label space. However, existing LDL evaluation metrics struggle to capture meaningful performance differences because they are insensitive to subtle distributional changes, and existing LDL learning objectives are often biased by disproportionately emphasizing a small subset of samples with extreme predictions. As a result, LDL metrics lose their discriminability, and LDL objectives risk overfitting. In this paper, we propose DeltaLDL, the percentage of predictions that are approximately correct in the LDL setting, as a solution to both problems. DeltaLDL can serve as a novel evaluation metric that is parameter-free and better reflects real performance improvements. It can also serve as a novel learning objective that is differentiable and encourages most samples to be predicted approximately correctly, thereby mitigating overfitting. Our theoretical analysis and empirical results demonstrate the effectiveness of the proposed solution.
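The abstract does not spell out how "approximately correct" is measured or how the objective is made differentiable, so the following is only an illustrative sketch, not the paper's method. It assumes a hypothetical definition: a prediction counts as approximately correct when its Chebyshev distance to the ground-truth distribution is within a tolerance `eps`, and the hard indicator is relaxed with a sigmoid of temperature `tau` to obtain a differentiable objective.

```python
import numpy as np

def delta_ldl_metric(pred, true, eps=0.1):
    """Illustrative accuracy-style metric: the fraction of samples whose
    predicted label distribution is within eps of the ground truth under
    Chebyshev (max-absolute) distance. The distance and threshold are
    assumptions, not the paper's exact definition."""
    dist = np.abs(pred - true).max(axis=1)   # per-sample Chebyshev distance
    return float((dist <= eps).mean())       # share of "approximately correct"

def delta_ldl_loss(pred, true, eps=0.1, tau=0.02):
    """Illustrative differentiable surrogate: replace the hard indicator
    1[dist <= eps] with a sigmoid, then minimize one minus the mean
    soft-correctness so most samples are pushed inside the tolerance."""
    dist = np.abs(pred - true).max(axis=1)
    soft_correct = 1.0 / (1.0 + np.exp(-(eps - dist) / tau))  # smooth 0/1
    return float(1.0 - soft_correct.mean())

# Tiny usage example on two 3-label distributions.
true = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.5, 0.3]])
pred = np.array([[0.55, 0.33, 0.12],   # within eps of the truth
                 [0.40, 0.35, 0.25]])  # outside eps of the truth
print(delta_ldl_metric(pred, true))    # one of two samples is close enough
print(delta_ldl_loss(pred, true))      # lower as more samples fall within eps
```

Because the sigmoid is smooth in `pred`, the surrogate loss can be dropped into any gradient-based LDL trainer, whereas the hard metric is reserved for evaluation.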

Cite

Text

Li et al. "Approximately Correct Label Distribution Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Li et al. "Approximately Correct Label Distribution Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/li2025icml-approximately/)

BibTeX

@inproceedings{li2025icml-approximately,
  title     = {{Approximately Correct Label Distribution Learning}},
  author    = {Li, Weiwei and Wu, Haitao and Lu, Yunan and Jia, Xiuyi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {36298--36309},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/li2025icml-approximately/}
}