PAC Generalization Bounds for Co-Training

Abstract

The rule-based bootstrapping introduced by Yarowsky, and its co-training variant by Blum and Mitchell, have met with considerable empirical success. Earlier work on the theory of co-training has been only loosely related to empirically useful co-training algorithms. Here we give a new PAC-style bound on generalization error which justifies both the use of confidences (partial rules and partial labeling of the unlabeled data) and the use of an agreement-based objective function as suggested by Collins and Singer. Our bounds apply to the multiclass case, i.e., where instances are to be assigned one of k labels for k ≥ 2.
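The abstract's central idea is that, under the paper's view-independence assumptions, two view-specific rules that rarely disagree on unlabeled data are likely to have low generalization error, which motivates an agreement-based objective in the spirit of Collins and Singer. The sketch below is a minimal illustration of computing that empirical disagreement for partial rules; the function name, the ABSTAIN convention, and the NumPy usage are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Convention (assumed here, not from the paper): a partial rule may
# abstain on an instance instead of committing to one of the k labels.
ABSTAIN = -1

def disagreement_rate(h1_preds, h2_preds):
    """Empirical disagreement between two partial rules on unlabeled data.

    h1_preds, h2_preds: label predictions (or ABSTAIN) made by the two
    view-specific rules on the same unlabeled instances. Only instances
    where both rules commit to a label are counted, mirroring the use of
    partial rules / partial labeling described in the abstract.
    """
    h1_preds = np.asarray(h1_preds)
    h2_preds = np.asarray(h2_preds)
    both_commit = (h1_preds != ABSTAIN) & (h2_preds != ABSTAIN)
    if not both_commit.any():
        return 1.0  # no overlap, so no evidence of agreement
    return float(np.mean(h1_preds[both_commit] != h2_preds[both_commit]))

# Example usage: a rule pair with low disagreement on unlabeled data is
# preferred by an agreement-based objective.
u1 = [0, 2, ABSTAIN, 1, 1, 2]
u2 = [0, 2, 1, ABSTAIN, 1, 0]
print(disagreement_rate(u1, u2))  # 0.25 (1 disagreement out of 4 commits)
```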

Cite

Text

Dasgupta et al. "PAC Generalization Bounds for Co-Training." Neural Information Processing Systems, 2001.

Markdown

[Dasgupta et al. "PAC Generalization Bounds for Co-Training." Neural Information Processing Systems, 2001.](https://mlanthology.org/neurips/2001/dasgupta2001neurips-pac/)

BibTeX

@inproceedings{dasgupta2001neurips-pac,
  title     = {{PAC Generalization Bounds for Co-Training}},
  author    = {Dasgupta, Sanjoy and Littman, Michael L. and McAllester, David A.},
  booktitle = {Neural Information Processing Systems},
  year      = {2001},
  pages     = {375--382},
  url       = {https://mlanthology.org/neurips/2001/dasgupta2001neurips-pac/}
}