Active Learning for Function Approximation
Abstract
We develop a principled strategy to sample a function optimally for function approximation tasks within a Bayesian framework. Using ideas from optimal experiment design, we introduce an objective function (incorporating both bias and variance) to measure the degree of approximation, and the potential utility of the data points towards optimizing this objective. We show how the general strategy can be used to derive precise algorithms to select data for two cases: learning unit step functions and polynomial functions. In particular, we investigate whether such active algorithms can learn the target with fewer examples. We obtain theoretical and empirical results to suggest that this is the case.
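As a concrete illustration of the unit-step case mentioned in the abstract, the sketch below contrasts an active sampler, which always queries the midpoint of the current uncertainty interval around the step's transition point, with a passive sampler that draws query points uniformly at random. This is a minimal Python sketch, not the paper's Bayesian derivation; it assumes the target is f(x) = 1[x >= a] on [0, 1], and the names (active_learn_step, passive_learn_step, a_true) are hypothetical.

import random

def active_learn_step(f, lo=0.0, hi=1.0, n_queries=10):
    """Actively localize the threshold a of a unit step f(x) = 1[x >= a].

    Querying the midpoint of the current uncertainty interval halves it
    on every query (binary search), so the estimation error shrinks like
    2**-n_queries, versus roughly 1/n for passive uniform sampling.
    """
    for _ in range(n_queries):
        mid = (lo + hi) / 2.0
        if f(mid):      # step is already "on": threshold lies at or below mid
            hi = mid
        else:           # step is still "off": threshold lies above mid
            lo = mid
    return (lo + hi) / 2.0  # point estimate of the threshold

def passive_learn_step(f, n_queries=10, rng=random):
    """Estimate the threshold from uniformly random (passive) samples."""
    lo, hi = 0.0, 1.0
    for _ in range(n_queries):
        x = rng.uniform(0.0, 1.0)
        if f(x):
            hi = min(hi, x)  # observed "on": threshold is at or below x
        else:
            lo = max(lo, x)  # observed "off": threshold is above x
    return (lo + hi) / 2.0

if __name__ == "__main__":
    a_true = 0.618                      # hypothetical true transition point
    f = lambda x: x >= a_true
    print("active error: ", abs(active_learn_step(f) - a_true))
    print("passive error:", abs(passive_learn_step(f) - a_true))

Run with the same query budget, the active error is typically orders of magnitude smaller, matching the abstract's claim that active algorithms can learn the target with fewer examples.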
Cite
Text
Sung and Niyogi. "Active Learning for Function Approximation." Neural Information Processing Systems, 1994.
Markdown
[Sung and Niyogi. "Active Learning for Function Approximation." Neural Information Processing Systems, 1994.](https://mlanthology.org/neurips/1994/sung1994neurips-active/)
BibTeX
@inproceedings{sung1994neurips-active,
  title = {{Active Learning for Function Approximation}},
  author = {Sung, Kah Kay and Niyogi, Partha},
  booktitle = {Neural Information Processing Systems},
  year = {1994},
  pages = {593-600},
  url = {https://mlanthology.org/neurips/1994/sung1994neurips-active/}
}