Flexibly Exploiting Prior Knowledge in Empirical Learning

Abstract

This paper presents a method to incorporate knowledge from possibly imperfect models and domain theories into inductive learning of decision trees for classification. The approach assumes that a model or domain theory reflects useful prior knowledge of the task. Thus, the default bias should accept the model's predictions as accurate, even in the face of somewhat contradictory data, which may be unrepresentative or noisy. However, our approach allows the system to abandon the model or domain theory, or portions thereof, in the face of sufficiently contradictory data. In particular, we use C4.5 to induce decision trees from data that have been augmented by model- or domain-theory-derived features. We weakly bias the system to select model-derived features during decision tree induction, but this preference is not dogmatically applied. Our experiments vary the imperfection in a model, the representativeness of the data, and the veracity with which model-derived features are preferred.
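The following is a minimal, hypothetical Python sketch of the core idea in the abstract: predictions of a possibly imperfect domain model are appended to each training example as an extra feature, and a small multiplicative bonus weakly biases attribute selection toward model-derived features. The paper itself uses C4.5; the toy information-gain code, the BIAS constant, and the domain_model function below are illustrative assumptions, not the authors' implementation.

import math
from collections import Counter

BIAS = 1.2  # >1 weakly prefers model-derived features; 1.0 disables the bias

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(examples, labels, attr):
    # Information gain of splitting the examples on attribute index attr.
    base = entropy(labels)
    by_value = {}
    for x, y in zip(examples, labels):
        by_value.setdefault(x[attr], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in by_value.values())
    return base - remainder

def augment(examples, domain_model):
    # Append the (possibly imperfect) model's prediction as a new feature.
    return [x + [domain_model(x)] for x in examples]

def pick_attribute(examples, labels, model_derived):
    # Choose a split attribute; gains of model-derived features get a bonus,
    # so they win ties and near-ties but lose to clearly better attributes.
    def score(attr):
        g = info_gain(examples, labels, attr)
        return g * BIAS if attr in model_derived else g
    return max(range(len(examples[0])), key=score)

# Toy usage: a hand-built domain theory predicting 'pos' when attribute 0 is 1.
domain_model = lambda x: "pos" if x[0] == 1 else "neg"
data = [[1, 0], [1, 1], [0, 0], [0, 1]]
labels = ["pos", "pos", "neg", "pos"]
augmented = augment(data, domain_model)
best = pick_attribute(augmented, labels, model_derived={2})

Because the preference is only a multiplicative bonus rather than a hard constraint, sufficiently contradictory data can still make the induction abandon the model-derived feature, which is the flexibility the abstract describes.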

Cite

Text

Ortega and Fisher. "Flexibly Exploiting Prior Knowledge in Empirical Learning." International Joint Conference on Artificial Intelligence, 1995.

Markdown

[Ortega and Fisher. "Flexibly Exploiting Prior Knowledge in Empirical Learning." International Joint Conference on Artificial Intelligence, 1995.](https://mlanthology.org/ijcai/1995/ortega1995ijcai-flexibly/)

BibTeX

@inproceedings{ortega1995ijcai-flexibly,
  title     = {{Flexibly Exploiting Prior Knowledge in Empirical Learning}},
  author    = {Ortega, Julio and Fisher, Douglas},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1995},
  pages     = {1041-1049},
  url       = {https://mlanthology.org/ijcai/1995/ortega1995ijcai-flexibly/}
}