Partial Multi-Label Learning with Probabilistic Graphical Disambiguation
Abstract
In partial multi-label learning (PML), each training example is associated with a set of candidate labels, among which only some labels are valid. As a common strategy to tackle the PML problem, disambiguation aims to recover the ground-truth labeling information from such inaccurate annotations. However, existing approaches mainly rely on heuristics or ad-hoc rules to disambiguate candidate labels, which may not be universal enough for complicated real-world scenarios. To provide a principled way for disambiguation, we make a first attempt to explore probabilistic graphical models for the PML problem, where a directed graph is tailored to infer latent ground-truth labeling information from the generative process of partial multi-label data. Under the framework of stochastic gradient variational Bayes, a unified variational lower bound is derived for this graphical model, which is further relaxed probabilistically so that the desired prediction model can be induced along with simultaneously identified ground-truth labeling information. Comprehensive experiments on multiple synthetic and real-world data sets show that our approach outperforms state-of-the-art counterparts.
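To make the SGVB-style disambiguation idea concrete, the sketch below is a minimal illustration rather than the authors' actual model: it assumes factorized Bernoulli latent ground-truth labels, a relaxed-Bernoulli sample for differentiability, and simple MLP encoder/predictor/decoder components (all module names, architectures, and the relaxation choice are assumptions), and it optimizes a standard evidence lower bound of the form E_q[log p(s|y)] − KL(q(y|x,s) || p(y|x)).

```python
# Illustrative sketch only (NOT the paper's exact model): an SGVB-style
# evidence lower bound for partial multi-label data, where latent
# ground-truth labels y are inferred from features x and the observed
# candidate label set s.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PMLVariationalSketch(nn.Module):
    def __init__(self, in_dim: int, num_labels: int, hidden: int = 128):
        super().__init__()
        # q_phi(y | x, s): approximate posterior over latent true labels
        self.encoder = nn.Sequential(
            nn.Linear(in_dim + num_labels, hidden), nn.ReLU(),
            nn.Linear(hidden, num_labels),
        )
        # p_theta(y | x): prior / prediction model over true labels
        self.predictor = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_labels),
        )
        # p_theta(s | y): candidate labels generated from true labels
        self.decoder = nn.Linear(num_labels, num_labels)

    def elbo(self, x, s, temperature: float = 0.5):
        # Posterior logits over latent ground-truth labels.
        q_logits = self.encoder(torch.cat([x, s], dim=-1))
        # Probabilistic relaxation: sample relaxed Bernoulli labels so the
        # bound stays differentiable (one common choice; an assumption here).
        q_dist = torch.distributions.RelaxedBernoulli(
            torch.tensor(temperature), logits=q_logits)
        y_relaxed = q_dist.rsample()

        # Reconstruction term: log p(s | y) for the observed candidate set.
        recon = -F.binary_cross_entropy_with_logits(
            self.decoder(y_relaxed), s, reduction="none").sum(-1)

        # KL(q(y | x, s) || p(y | x)) between factorized Bernoullis.
        p_logits = self.predictor(x)
        q_prob = torch.sigmoid(q_logits)
        kl = (q_prob * (F.logsigmoid(q_logits) - F.logsigmoid(p_logits))
              + (1 - q_prob) * (F.logsigmoid(-q_logits)
                                - F.logsigmoid(-p_logits))).sum(-1)

        # Maximize the ELBO, i.e. minimize its negative during training.
        return (recon - kl).mean()
```

At test time, only the hypothetical `predictor` head would be used for multi-label prediction, while the encoder and decoder serve disambiguation during training.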
Cite
Text
Hang and Zhang. "Partial Multi-Label Learning with Probabilistic Graphical Disambiguation." Neural Information Processing Systems, 2023.
Markdown
[Hang and Zhang. "Partial Multi-Label Learning with Probabilistic Graphical Disambiguation." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/hang2023neurips-partial/)
BibTeX
@inproceedings{hang2023neurips-partial,
  title = {{Partial Multi-Label Learning with Probabilistic Graphical Disambiguation}},
  author = {Hang, Jun-Yi and Zhang, Min-Ling},
  booktitle = {Neural Information Processing Systems},
  year = {2023},
  url = {https://mlanthology.org/neurips/2023/hang2023neurips-partial/}
}