Optimal Layered Learning: A PAC Approach to Incremental Sampling

Abstract

It is best to learn a large theory in small pieces. An approach called “layered learning” starts by learning an approximately correct theory. The errors of this approximation are then used to construct a second-order “correcting” theory, which will again be only approximately correct. The process is iterated until some desired level of overall theory accuracy is reached. The main advantage of this approach is that the sizes of successive training sets (the errors of the hypothesis from the previous iteration) are kept small. General lower-bound PAC-learning results are used in this paper to show that, under optimal layered learning, the total training set size (t) increases linearly in the number of layers, while the total training and test set size (m) increases exponentially and the error (e) decreases exponentially. As a consequence, a model of layered learning which requires that t, rather than m, be a polynomial function of the logarithm of the size of the concept space would make learnable many concept classes that are not learnable in Valiant's PAC model.
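The growth rates described in the abstract can be illustrated with a toy simulation. This is a sketch, not the paper's construction: it assumes each correcting layer is trained on a fixed number of errors of the previous hypothesis (`per_layer_train`) and cuts the overall error by a constant factor (`error_ratio`); both parameters are illustrative choices, not values from the paper.

```python
# Toy model of layered learning sample sizes (illustrative only).
# Assumption: each layer needs a fixed number of training examples,
# which must be *errors* of the current hypothesis, so collecting them
# requires drawing roughly per_layer_train / e examples in total.

def layered_learning_sizes(layers, per_layer_train=100, error_ratio=0.5):
    """Return a list of (layer, t, m, e) tuples, where t is the cumulative
    training set size, m the cumulative training + test set size, and e the
    overall error after that layer."""
    t = 0        # cumulative training examples (errors kept for training)
    m = 0.0      # cumulative examples drawn (training + test)
    e = 1.0      # current overall error rate
    history = []
    for layer in range(1, layers + 1):
        # Expected draws needed to observe per_layer_train errors:
        drawn = per_layer_train / e
        t += per_layer_train
        m += drawn
        e *= error_ratio   # assumed: each correcting theory halves the error
        history.append((layer, t, m, e))
    return history

for layer, t, m, e in layered_learning_sizes(5):
    print(f"layer {layer}: t={t}, m={m:.0f}, e={e:.4f}")
```

Under these assumptions the printout shows the qualitative behaviour the abstract claims: t grows linearly (100 per layer), m roughly doubles its increment each layer, and e halves each layer.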

Cite

Text

Muggleton. "Optimal Layered Learning: A PAC Approach to Incremental Sampling." International Conference on Algorithmic Learning Theory, 1993. doi:10.1007/3-540-57370-4_35

Markdown

[Muggleton. "Optimal Layered Learning: A PAC Approach to Incremental Sampling." International Conference on Algorithmic Learning Theory, 1993.](https://mlanthology.org/alt/1993/muggleton1993alt-optimal/) doi:10.1007/3-540-57370-4_35

BibTeX

@inproceedings{muggleton1993alt-optimal,
  title     = {{Optimal Layered Learning: A PAC Approach to Incremental Sampling}},
  author    = {Muggleton, Stephen H.},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {1993},
  pages     = {37--44},
  doi       = {10.1007/3-540-57370-4_35},
  url       = {https://mlanthology.org/alt/1993/muggleton1993alt-optimal/}
}