PAC Learning Under Helpful Distributions

Abstract

A PAC model under helpful distributions is introduced. A teacher associates a teaching set with each target concept, and we consider only distributions in which each example of the teaching set has non-zero weight. The performance of a learning algorithm depends on the probabilities of the examples in this teaching set. In this model, an Occam's razor theorem and its converse are proved. The class of decision lists is proved PAC learnable under helpful distributions. A PAC learning model with a simple teacher (where simplicity is based on program-size complexity) is also defined, and the model is compared with other models of teaching.

Cite

Text

Denis and Gilleron. "PAC Learning Under Helpful Distributions." International Conference on Algorithmic Learning Theory, 1997. doi:10.1007/3-540-63577-7_40

Markdown

[Denis and Gilleron. "PAC Learning Under Helpful Distributions." International Conference on Algorithmic Learning Theory, 1997.](https://mlanthology.org/alt/1997/denis1997alt-pac/) doi:10.1007/3-540-63577-7_40

BibTeX

@inproceedings{denis1997alt-pac,
  title     = {{PAC Learning Under Helpful Distributions}},
  author    = {Denis, François and Gilleron, Rémi},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {1997},
  pages     = {132--145},
  doi       = {10.1007/3-540-63577-7_40},
  url       = {https://mlanthology.org/alt/1997/denis1997alt-pac/}
}