Learning from Multi-Dimensional Partial Labels
Abstract
Multi-dimensional classification (MDC) has attracted considerable attention from the community. Although most studies assume fully annotated data, obtaining fully labeled data in MDC tasks is usually intractable in practice. In this paper, we propose a novel learning paradigm: Multi-Dimensional Partial Label Learning (MDPL), where the ground-truth labels of each instance are concealed in multiple candidate label sets. We first introduce the partial Hamming loss for MDPL, which incurs a large loss if a predicted label falls outside its candidate label set, and provide an empirical risk minimization (ERM) framework. Theoretically, we rigorously prove the conditions for ERM learnability of MDPL in both the independent and dependent cases. Furthermore, we present two MDPL algorithms under the proposed ERM framework. Comprehensive experiments on both synthetic and real-world datasets validate the effectiveness of our proposals.
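The partial Hamming loss described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes each instance carries one prediction and one candidate label set per dimension, and averages a 0/1 penalty over dimensions, charging a dimension only when its predicted label is outside its candidate set.

```python
def partial_hamming_loss(predictions, candidate_sets):
    """Fraction of dimensions whose predicted label is not in
    the corresponding candidate label set.

    predictions    : list of predicted labels, one per dimension
    candidate_sets : list of sets of candidate labels, one per dimension
    """
    assert len(predictions) == len(candidate_sets)
    misses = sum(
        1 for y, cands in zip(predictions, candidate_sets)
        if y not in cands
    )
    return misses / len(predictions)


# Example: 3 dimensions; the second prediction falls outside its
# candidate set, so the loss is 1/3.
loss = partial_hamming_loss(
    predictions=[0, 2, 1],
    candidate_sets=[{0, 1}, {0, 1}, {1, 2}],
)
```

Unlike the standard Hamming loss, which compares predictions against a single ground-truth label per dimension, this variant only penalizes predictions that can be ruled out by the candidate sets.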
Cite
Text
Wang et al. "Learning from Multi-Dimensional Partial Labels." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/407
Markdown
[Wang et al. "Learning from Multi-Dimensional Partial Labels." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/wang2020ijcai-learning/) doi:10.24963/IJCAI.2020/407
BibTeX
@inproceedings{wang2020ijcai-learning,
title = {{Learning from Multi-Dimensional Partial Labels}},
author = {Wang, Haobo and Liu, Weiwei and Zhao, Yang and Hu, Tianlei and Chen, Ke and Chen, Gang},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
pages = {2943-2949},
doi = {10.24963/IJCAI.2020/407},
url = {https://mlanthology.org/ijcai/2020/wang2020ijcai-learning/}
}