Practical PAC Learning
Abstract
We present new strategies for "probably approximately correct" (pac) learning that use fewer training examples than previous approaches. The idea is to observe training examples one at a time and decide "on-line" when to return a hypothesis, rather than collect a large fixed-size training sample. This yields sequential learning procedures that pac-learn by observing a small random number of examples. We provide theoretical bounds on the expected training sample size of our procedure, but establish its efficiency primarily by a series of experiments which show that sequential learning actually uses many times fewer training examples in practice. These results demonstrate that pac-learning can be achieved far more efficiently in practice than previously thought.

1 Introduction
We consider the standard problem of learning an accurate classifier from examples: given a target classification scheme c : X → Y defined on a domain X, we are interested in observing a sequence of training examples...
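The sequential idea in the abstract (observe examples one at a time and decide online when to return a hypothesis) can be sketched with a simple stopping rule. This is a hypothetical illustration, not the paper's actual procedure: the rule shown, which accepts the i-th candidate hypothesis once it correctly classifies roughly (1/ε)·ln(2^i/δ) consecutive fresh examples, is one standard way to make such a sequential test PAC-valid, and all function names are made up.

```python
import math
import random

def sequential_pac_learn(draw_example, learn, epsilon, delta):
    """Hypothetical sequential PAC learner (illustration only).

    Observes labeled examples one at a time and returns the current
    hypothesis once it survives enough consecutive fresh examples.
    If the i-th candidate has true error > epsilon, it survives k_i
    fresh examples with probability < (1 - epsilon)^k_i; choosing
    k_i = ceil(ln(2^i / delta) / epsilon) makes the total failure
    probability over all candidates less than delta.
    """
    sample = []      # misclassified examples kept for retraining
    h = None         # current candidate hypothesis
    round_i = 0      # index of the current candidate
    streak = 0       # consecutive fresh examples h classified correctly
    while True:
        x, y = draw_example()
        if h is not None and h(x) == y:
            streak += 1
            k_i = math.ceil(math.log((2 ** round_i) / delta) / epsilon)
            if streak >= k_i:
                return h  # survived the sequential test: accept
        else:
            # h erred (or no hypothesis yet): record the example,
            # refit, and restart the streak for the new candidate
            sample.append((x, y))
            round_i += 1
            h = learn(sample)
            streak = 0
```

Here `draw_example` returns a fresh (x, label) pair and `learn` fits a hypothesis to the accumulated misclassified examples; both are placeholders you would supply. The point mirrors the abstract: the number of examples consumed is a random variable, and on easy targets the test fires long before a fixed worst-case sample bound would.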
Cite
Text
Schuurmans and Greiner. "Practical PAC Learning." International Joint Conference on Artificial Intelligence, 1995.
Markdown
[Schuurmans and Greiner. "Practical PAC Learning." International Joint Conference on Artificial Intelligence, 1995.](https://mlanthology.org/ijcai/1995/schuurmans1995ijcai-practical/)
BibTeX
@inproceedings{schuurmans1995ijcai-practical,
title = {{Practical PAC Learning}},
author = {Schuurmans, Dale and Greiner, Russell},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {1995},
pages = {1169--1177},
url = {https://mlanthology.org/ijcai/1995/schuurmans1995ijcai-practical/}
}