On the Complexity of Learning a Class Ratio from Unlabeled Data

Abstract

In the problem of learning a class ratio from unlabeled data, which we call CR learning, the training data is unlabeled, and only the ratios, or proportions, of examples receiving each label are given. The goal is to learn a hypothesis that predicts the proportions of labels on the distribution underlying the sample. This model of learning is applicable to a wide variety of settings, including predicting the number of votes for candidates in political elections from polls. In this paper, we formally define this learning model, resolve foundational questions regarding the computational complexity of CR learning, and characterize its relationship to PAC learning. Among our results, we show, perhaps surprisingly, that for finite VC classes what can be efficiently CR learned is a strict subset of what can be efficiently PAC learned, under standard complexity assumptions. We also show that there exist classes of functions whose CR learnability is independent of ZFC, the standard set-theoretic axioms. This implies that CR learning cannot be easily characterized (the way PAC learning is by VC dimension).
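
To make the setting concrete, the following is a minimal Python sketch of the CR learning setup over a toy class of one-dimensional threshold functions. It is an illustration of the problem statement, not an algorithm from the paper; the function names, the toy hypothesis class, and the brute-force search are all hypothetical. The learner sees only an unlabeled sample and the reported proportion of positive labels, and it picks the hypothesis whose predicted positive rate on the sample best matches that proportion.

import random

def threshold_hypothesis(t):
    # Toy hypothesis h_t(x) = 1 iff x >= t; the class {h_t} has VC dimension 1.
    return lambda x: 1 if x >= t else 0

def cr_learn(unlabeled_sample, label_proportion, hypothesis_class):
    # Brute-force illustration: return the hypothesis whose positive rate on
    # the unlabeled sample is closest to the reported class ratio.
    def predicted_ratio(h):
        return sum(h(x) for x in unlabeled_sample) / len(unlabeled_sample)
    return min(hypothesis_class,
               key=lambda h: abs(predicted_ratio(h) - label_proportion))

# Toy usage: points are uniform on [0, 1]; the hidden labels come from the
# threshold 0.7, but the learner only sees the ~0.3 positive proportion.
random.seed(0)
sample = [random.random() for _ in range(1000)]
observed_ratio = 0.3
hypotheses = [threshold_hypothesis(t / 100) for t in range(101)]
best = cr_learn(sample, observed_ratio, hypotheses)
print(sum(best(x) for x in sample) / len(sample))  # close to 0.3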

Cite

Text

Fish and Reyzin. "On the Complexity of Learning a Class Ratio from Unlabeled Data." Journal of Artificial Intelligence Research, 2020. doi:10.1613/JAIR.1.12013

Markdown

[Fish and Reyzin. "On the Complexity of Learning a Class Ratio from Unlabeled Data." Journal of Artificial Intelligence Research, 2020.](https://mlanthology.org/jair/2020/fish2020jair-complexity/) doi:10.1613/JAIR.1.12013

BibTeX

@article{fish2020jair-complexity,
  title     = {{On the Complexity of Learning a Class Ratio from Unlabeled Data}},
  author    = {Fish, Benjamin and Reyzin, Lev},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2020},
  pages     = {1333--1349},
  doi       = {10.1613/JAIR.1.12013},
  volume    = {69},
  url       = {https://mlanthology.org/jair/2020/fish2020jair-complexity/}
}