Positive-Unlabeled Learning with Non-Negative Risk Estimator

Abstract

From only positive (P) and unlabeled (U) data, a binary classifier can be trained with PU learning, for which the state of the art is unbiased PU learning. However, if the model is very flexible, the empirical risk on the training data can go negative, leading to serious overfitting. In this paper, we propose a non-negative risk estimator for PU learning: when it is minimized, it is more robust against overfitting, so very flexible models (such as deep neural networks) can be used even with limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, and bound the estimation error of the resulting empirical risk minimizer. Experiments demonstrate that our risk estimator fixes the overfitting problem of its unbiased counterparts.
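
For concreteness, below is a minimal NumPy sketch of the non-negative risk estimator described in the abstract, assuming the sigmoid loss and a known (or pre-estimated) class prior pi_p; the function and variable names are illustrative, not taken from the authors' code.

import numpy as np

def sigmoid_loss(z):
    # Sigmoid surrogate loss l(z) = 1 / (1 + exp(z)).
    return 1.0 / (1.0 + np.exp(z))

def nn_pu_risk(g_p, g_u, pi_p, loss=sigmoid_loss):
    # g_p: classifier outputs g(x) on positive examples.
    # g_u: classifier outputs g(x) on unlabeled examples.
    # pi_p: class prior p(y = +1), assumed known or estimated beforehand.
    risk_p_pos = np.mean(loss(g_p))    # positives treated as label +1
    risk_p_neg = np.mean(loss(-g_p))   # positives treated as label -1
    risk_u_neg = np.mean(loss(-g_u))   # unlabeled treated as label -1
    # The unbiased PU estimator uses (risk_u_neg - pi_p * risk_p_neg) as is,
    # which can go negative; clamping it at zero gives the non-negative estimator.
    return pi_p * risk_p_pos + max(0.0, risk_u_neg - pi_p * risk_p_neg)

Minimizing this quantity with any gradient-based learner yields the nnPU training objective; the paper additionally gives a stochastic optimization procedure suited to deep networks.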

Cite

Text

Kiryo et al. "Positive-Unlabeled Learning with Non-Negative Risk Estimator." Neural Information Processing Systems, 2017.

Markdown

[Kiryo et al. "Positive-Unlabeled Learning with Non-Negative Risk Estimator." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/kiryo2017neurips-positiveunlabeled/)

BibTeX

@inproceedings{kiryo2017neurips-positiveunlabeled,
  title     = {{Positive-Unlabeled Learning with Non-Negative Risk Estimator}},
  author    = {Kiryo, Ryuichi and Niu, Gang and du Plessis, Marthinus C. and Sugiyama, Masashi},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {1675--1685},
  url       = {https://mlanthology.org/neurips/2017/kiryo2017neurips-positiveunlabeled/}
}