Learning from an Approximate Theory and Noisy Examples
Abstract
This paper presents an approach to a new learning problem: learning from an approximate theory and a set of noisy examples. This problem requires a new learning approach since it cannot be satisfactorily solved by either inductive or analytic learning algorithms, or by their existing combinations. Our approach can be viewed as an extension of the minimum description length (MDL) principle, and is unique in that it is based on encoding the refinement required to transform the given theory into a better theory, rather than on encoding the resultant theory as in traditional MDL. Experimental results show that, based on our approach, the theory learned from an approximate theory and a set of noisy examples is more accurate than either the approximate theory itself or a theory learned from the examples alone. This suggests that our approach can combine useful information from both the theory and the training set even though each is only partially correct.
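The core idea of the refinement-based MDL criterion can be illustrated with a small sketch. The code below is not the authors' actual encoding scheme; it is a hedged toy example, assuming a hypothetical `data_code_length` based on a simple two-part code for misclassified examples, and made-up refinement costs and error counts. The score of a candidate theory is the number of bits needed to describe its refinement from the initial approximate theory, plus the bits needed to describe the examples it still misclassifies; the candidate minimizing this sum wins.

```python
import math

def data_code_length(n, k):
    # Bits to encode which k of n examples are misclassified,
    # using a simple two-part code: encode k, then the subset of size k.
    return math.log2(n + 1) + math.log2(math.comb(n, k))

def mdl_score(refinement_bits, n_examples, n_errors):
    # Total description length: bits for the refinement that transforms
    # the initial theory into this candidate, plus bits for the examples
    # the candidate still gets wrong.
    return refinement_bits + data_code_length(n_examples, n_errors)

# Hypothetical candidates: (name, refinement cost in bits, errors on 100 noisy examples).
candidates = [
    ("original theory", 0.0, 30),    # no refinement, but many errors
    ("small refinement", 12.0, 5),   # cheap edit, few residual errors
    ("memorize examples", 95.0, 0),  # large refinement that overfits the noise
]
best = min(candidates, key=lambda c: mdl_score(c[1], 100, c[2]))
print(best[0])  # → small refinement
```

Note how the criterion penalizes both extremes: keeping the approximate theory unchanged leaves many errors to encode, while fitting every noisy example exactly requires an expensive refinement, so a modest refinement achieves the shortest total code.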
Cite
Text

Tangkitvanich and Shimura. "Learning from an Approximate Theory and Noisy Examples." AAAI Conference on Artificial Intelligence, 1993. doi:10.11501/3077745

BibTeX
@inproceedings{tangkitvanich1993aaai-learning,
title = {{Learning from an Approximate Theory and Noisy Examples}},
author = {Tangkitvanich, Somkiat and Shimura, Masamichi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1993},
pages = {466-471},
doi = {10.11501/3077745},
url = {https://mlanthology.org/aaai/1993/tangkitvanich1993aaai-learning/}
}