Complementary-Label Learning for Arbitrary Losses and Models

Abstract

In contrast to the standard classification paradigm, where the true class is given for each training pattern, complementary-label learning uses only training patterns that are each equipped with a complementary label, which specifies one of the classes the pattern does not belong to. The goal of this paper is to derive a novel framework of complementary-label learning with an unbiased estimator of the classification risk for arbitrary losses and models, a goal that all existing methods have failed to achieve. Not only is this beneficial for the learning stage, but it also makes model/hyper-parameter selection (through cross-validation) possible without the need for any ordinarily labeled validation data, while using any linear/non-linear model or convex/non-convex loss function. We further improve the risk estimator with a non-negative correction and a gradient ascent trick, and demonstrate its superiority through experiments.
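The core identity behind the abstract can be made concrete. Assuming each complementary label is drawn uniformly from the K - 1 classes the pattern does not belong to, the ordinary classification risk rewrites as R(f) = E[ sum_k loss(f(x), k) - (K - 1) * loss(f(x), y_bar) ], which is estimable from complementarily labeled data alone, for any loss and any model. Below is a minimal PyTorch sketch of this estimator and the gradient ascent trick; the function names and the single global threshold `beta` are illustrative assumptions (the paper applies its non-negative correction per class), so treat this as a sketch rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def complementary_risk(logits, comp_labels, num_classes):
    """Unbiased classification-risk estimate from complementary labels.

    Assumes complementary labels are uniform over the K - 1 wrong classes,
    so R(f) = E[ sum_k loss(f(x), k) - (K - 1) * loss(f(x), y_bar) ].
    Cross-entropy is used here, but any loss fits the framework.
    """
    log_probs = F.log_softmax(logits, dim=1)
    losses_all = -log_probs                       # loss(f(x), k) for every class k
    sum_term = losses_all.sum(dim=1)              # sum_k loss(f(x), k)
    comp_term = losses_all.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return (sum_term - (num_classes - 1) * comp_term).mean()

def training_step(model, x, comp_labels, optimizer, num_classes, beta=0.0):
    """One update with a simplified non-negative correction.

    When the unbiased (but possibly negative) empirical risk drops below
    -beta, the gradient ascent trick pushes it back up instead of descending.
    """
    risk = complementary_risk(model(x), comp_labels, num_classes)
    optimizer.zero_grad()
    if risk.item() >= -beta:
        risk.backward()       # ordinary descent on the risk estimate
    else:
        (-risk).backward()    # ascend to counteract a negative risk estimate
    optimizer.step()
    return risk.item()
```

Because the estimator is unbiased for any loss and model, its empirical value on held-out complementarily labeled data can also drive cross-validation, which is the model-selection benefit the abstract claims.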

Cite

Text

Ishida et al. "Complementary-Label Learning for Arbitrary Losses and Models." International Conference on Machine Learning, 2019.

Markdown

[Ishida et al. "Complementary-Label Learning for Arbitrary Losses and Models." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/ishida2019icml-complementarylabel/)

BibTeX

@inproceedings{ishida2019icml-complementarylabel,
  title     = {{Complementary-Label Learning for Arbitrary Losses and Models}},
  author    = {Ishida, Takashi and Niu, Gang and Menon, Aditya and Sugiyama, Masashi},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {2971--2980},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/ishida2019icml-complementarylabel/}
}