Sequential Inductive Learning
Abstract
In this paper I advocate a new model for inductive learning. Called sequential induction, this model bridges classical fixed-sample learning techniques (which are efficient but ad hoc) and worst-case approaches (which provide strong statistical guarantees but are too inefficient for practical use). According to the sequential inductive model, learning is a sequence of decisions that are informed by training data. By analyzing induction at the level of these decisions, and by using the minimum data necessary to make each decision, sequential inductive techniques can provide the strong statistical guarantees of worst-case methods with substantially less data than those methods require. The sequential inductive model is also useful as a method for determining a sufficient sample size for inductive learning and, as such, is relevant to megainduction, where the preponderance of data introduces problems of scale. The peepholing and decision-theoretic subsampling approaches of Catlet...
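The core idea, making each decision with only as much data as that decision requires, can be illustrated with a standard sequential stopping rule. The sketch below is not taken from the paper; it is a minimal illustration, assuming [0, 1]-bounded observations and using a Hoeffding bound to decide between two options as soon as their estimated means are separated with confidence 1 − delta:

```python
import math
import random

def sequential_choice(sample_a, sample_b, delta=0.05, max_n=10000):
    """Sequentially sample two options, stopping as soon as a Hoeffding
    confidence bound separates their estimated means at level 1 - delta.
    Returns the apparent winner and the number of samples used."""
    sum_a = sum_b = 0.0
    for n in range(1, max_n + 1):
        sum_a += sample_a()  # draw one observation per option
        sum_b += sample_b()
        mean_a, mean_b = sum_a / n, sum_b / n
        # Hoeffding half-width for the mean of n observations in [0, 1]
        eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        if abs(mean_a - mean_b) > 2.0 * eps:
            return ("a" if mean_a > mean_b else "b"), n
    # Budget exhausted: commit to the current leader
    return ("a" if mean_a > mean_b else "b"), max_n

# Example: two hypothetical Bernoulli options with a large gap
random.seed(0)
winner, n_used = sequential_choice(
    lambda: float(random.random() < 0.9),
    lambda: float(random.random() < 0.1),
)
```

An easy choice like the one above is settled after a handful of samples, while a close one consumes more; this adaptivity is what lets decision-level analysis keep worst-case-style guarantees at fixed-sample-style cost.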
Cite
Gratch. "Sequential Inductive Learning." AAAI Conference on Artificial Intelligence, 1996.
@inproceedings{gratch1996aaai-sequential,
title = {{Sequential Inductive Learning}},
author = {Gratch, Jonathan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1996},
pages = {779-786},
url = {https://mlanthology.org/aaai/1996/gratch1996aaai-sequential/}
}