Theoretical Analysis of Label Distribution Learning

Abstract

As a novel learning paradigm, label distribution learning (LDL) explicitly models label ambiguity through the notion of label description degree. Although much work has been done on applying LDL to real-world problems, theoretical results on LDL remain unexplored. In this paper, we rethink LDL from a theoretical perspective and analyze its learnability. First, risk bounds are provided for three representative LDL algorithms (AA-kNN, AA-BP, and SA-ME): for AA-kNN, Lipschitzness of the label distribution function is assumed to bound the risk, while for AA-BP and SA-ME, Rademacher complexity is utilized to give data-dependent risk bounds. Second, a generalized plug-in decision theorem is proposed to understand the relation between LDL and classification, showing that approximating the conditional probability distribution function under absolute loss guarantees convergence to the optimal classifier; data-dependent error-probability bounds are also presented for the corresponding LDL algorithms when used for classification. To the best of our knowledge, this is the first theoretical study of LDL.
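To make the two objects the abstract reasons about concrete, here is a minimal sketch (not code from the paper) of how AA-kNN produces a label distribution estimate and how a plug-in rule turns that estimate into a classifier. The function names, the Euclidean metric, and the uniform neighbor weighting are illustrative assumptions.

import numpy as np

def aa_knn_predict(X_train, D_train, x, k=5):
    # AA-kNN: estimate the label distribution of x as the mean of the
    # label distributions of its k nearest training examples.
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances (assumed metric)
    neighbors = np.argsort(dists)[:k]            # indices of the k nearest neighbors
    return D_train[neighbors].mean(axis=0)       # averaged description degrees

def plug_in_classify(d_hat):
    # Plug-in decision rule: predict the label whose estimated
    # description degree is largest.
    return int(np.argmax(d_hat))

# Toy usage: 4 training points, 3 labels, rows of D sum to 1.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
D = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1],
              [0.1, 0.2, 0.7], [0.1, 0.3, 0.6]])
d_hat = aa_knn_predict(X, D, np.array([0.9, 0.1]), k=2)
y_hat = plug_in_classify(d_hat)

In the paper's terms, the plug-in theorem says that if the estimated distribution d_hat approaches the conditional probability distribution in absolute loss, the classifier induced by the argmax rule approaches the optimal one.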

Cite

Text

Wang and Geng. "Theoretical Analysis of Label Distribution Learning." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33015256

Markdown

[Wang and Geng. "Theoretical Analysis of Label Distribution Learning." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/wang2019aaai-theoretical/) doi:10.1609/AAAI.V33I01.33015256

BibTeX

@inproceedings{wang2019aaai-theoretical,
  title     = {{Theoretical Analysis of Label Distribution Learning}},
  author    = {Wang, Jing and Geng, Xin},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {5256--5263},
  doi       = {10.1609/AAAI.V33I01.33015256},
  url       = {https://mlanthology.org/aaai/2019/wang2019aaai-theoretical/}
}