Online Partial Label Learning

Abstract

A common assumption in online learning is that data examples are precisely labeled. Unfortunately, noise-free data are intractable to obtain in many real-world applications, and datasets are often contaminated with irrelevant labels. To alleviate this problem, we propose a novel learning paradigm called Online Partial Label Learning (OPLL), where each data example is associated with multiple candidate labels. To learn from sequentially arriving data given only partial knowledge of the correct answer, we propose three effective maximum-margin-based algorithms. Theoretically, we derive regret bounds for the proposed algorithms, which guarantee their performance on unseen data. Extensive experiments on various synthetic UCI datasets and six real-world datasets validate the effectiveness of our proposed approaches.
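The paper's exact update rules are not reproduced on this page, but the general idea of a maximum-margin online update under partial labels can be sketched as follows. This is a hypothetical perceptron-style illustration, not the authors' algorithm: for each arriving example, the learner treats the highest-scoring candidate label as a pseudo-positive, the highest-scoring non-candidate label as the most violating negative, and updates the linear model only when the margin between them falls below one.

```python
import numpy as np

def partial_label_update(W, x, candidates, lr=0.1):
    """One illustrative max-margin update for a partially labeled example.

    W          : (num_classes, dim) weight matrix, one row per class.
    x          : (dim,) feature vector of the arriving example.
    candidates : iterable of candidate label indices (true label assumed inside).
    """
    scores = W @ x
    # Pseudo-positive: the highest-scoring label among the candidates.
    pos = max(candidates, key=lambda y: scores[y])
    # Most violating negative: the highest-scoring non-candidate label.
    non_candidates = [y for y in range(W.shape[0]) if y not in candidates]
    neg = max(non_candidates, key=lambda y: scores[y])
    # Hinge-style margin check: update only on a violation.
    if scores[pos] - scores[neg] < 1.0:
        W[pos] += lr * x  # push the pseudo-positive score up
        W[neg] -= lr * x  # push the violating negative score down
    return W

# Toy usage: three classes, candidate set {0, 1}, one update step.
W = np.zeros((3, 4))
x = np.array([1.0, 0.0, 0.0, 0.0])
W = partial_label_update(W, x, {0, 1})
```

After the update, some candidate label outscores the non-candidate label on this example, which is the margin condition such algorithms aim to enforce over the online sequence.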

Cite

Text

Wang et al. "Online Partial Label Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2020. doi:10.1007/978-3-030-67661-2_27

Markdown

[Wang et al. "Online Partial Label Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2020.](https://mlanthology.org/ecmlpkdd/2020/wang2020ecmlpkdd-online/) doi:10.1007/978-3-030-67661-2_27

BibTeX

@inproceedings{wang2020ecmlpkdd-online,
  title     = {{Online Partial Label Learning}},
  author    = {Wang, Haobo and Qiang, Yuzhou and Chen, Chen and Liu, Weiwei and Hu, Tianlei and Li, Zhao and Chen, Gang},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2020},
  pages     = {455--470},
  doi       = {10.1007/978-3-030-67661-2_27},
  url       = {https://mlanthology.org/ecmlpkdd/2020/wang2020ecmlpkdd-online/}
}