Discriminative Complementary-Label Learning with Weighted Loss

Abstract

Complementary-label learning (CLL) deals with the weak supervision scenario where each training instance is associated with one \emph{complementary} label, which specifies a class label that the instance does \emph{not} belong to. Given a training instance ${\bm x}$, existing CLL approaches aim at modeling the \emph{generative} relationship between the complementary label $\bar y$, i.e. $P(\bar y\mid {\bm x})$, and the ground-truth label $y$, i.e. $P(y\mid {\bm x})$. Nonetheless, as the ground-truth label is not directly accessible for complementarily labeled training instances, strong generative assumptions may not hold for real-world CLL tasks. In this paper, we derive a simple and theoretically-sound \emph{discriminative} model of $P(\bar y\mid {\bm x})$, which naturally leads to a risk estimator whose estimation error bound converges at rate $\mathcal{O}(1/\sqrt{n})$. Accordingly, a practical CLL approach is proposed by further introducing a weighted loss into the empirical risk to maximize the predictive gap between the potential ground-truth label and the complementary label. Extensive experiments clearly validate the effectiveness of the proposed discriminative complementary-label learning approach.
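
The abstract describes the approach only at a high level. As a rough illustration, the sketch below implements one plausible reading of it: treat $-\log(1 - P(\bar y\mid {\bm x}))$ as the per-instance discriminative loss that drives the complementary-label probability toward zero, optionally reweighted to widen the gap between the complementary label and the most confident remaining class. The function name `discriminative_cll_loss`, the exact loss form, the weighting scheme, and the `eps` stabilizer are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def discriminative_cll_loss(logits, comp_labels, weighted=True, eps=1e-12):
    """Hedged sketch of a discriminative complementary-label loss.

    logits:      (n, K) raw classifier outputs g(x).
    comp_labels: (n,)   complementary labels, i.e. the class each
                 instance is known NOT to belong to.
    """
    probs = F.softmax(logits, dim=1)                          # p(y = j | x)
    p_bar = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    # Discriminative term: drive p(ybar | x) toward zero via -log(1 - p_ybar),
    # i.e. maximize the probability mass on the non-complementary classes.
    base = -torch.log(1.0 - p_bar + eps)
    if not weighted:
        return base.mean()
    # Hypothetical weighting (an assumption, not the authors' scheme):
    # up-weight instances where the complementary label still competes with
    # the most confident remaining class, widening their predictive gap.
    mask = F.one_hot(comp_labels, num_classes=probs.size(1)).bool()
    p_top = probs.masked_fill(mask, 0.0).max(dim=1).values
    weights = (1.0 + p_bar / (p_top + eps)).detach()
    return (weights * base).mean()

# Minimal usage example with random data.
logits = torch.randn(4, 10, requires_grad=True)   # 4 instances, 10 classes
comp_labels = torch.tensor([3, 0, 7, 7])          # classes ruled out
loss = discriminative_cll_loss(logits, comp_labels)
loss.backward()
```

Under this reading, the unweighted variant corresponds to the plain empirical risk over complementarily labeled data, while the weights play the gap-maximizing role the abstract attributes to the weighted loss.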

Cite

Text

Gao and Zhang. "Discriminative Complementary-Label Learning with Weighted Loss." International Conference on Machine Learning, 2021.

Markdown

[Gao and Zhang. "Discriminative Complementary-Label Learning with Weighted Loss." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/gao2021icml-discriminative/)

BibTeX

@inproceedings{gao2021icml-discriminative,
  title     = {{Discriminative Complementary-Label Learning with Weighted Loss}},
  author    = {Gao, Yi and Zhang, Min-Ling},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {3587--3597},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/gao2021icml-discriminative/}
}