Learning Rules from Incomplete Examples via Implicit Mention Models
Abstract
We study the problem of learning general rules from concrete facts extracted from natural data sources such as newspaper stories and medical histories. Natural data sources present two challenges to automated learning, namely, radical incompleteness and systematic bias. In this paper, we propose an approach that combines simultaneous learning of multiple predictive rules with differential scoring of evidence that adapts to a presumed model of data generation. Learning multiple predicates simultaneously mitigates the problem of radical incompleteness, while differential scoring helps reduce the effects of systematic bias. We evaluate our approach empirically on both textual and non-textual sources. We further present a theoretical analysis that elucidates our approach and explains the empirical results.
Cite
Text

Doppa et al. "Learning Rules from Incomplete Examples via Implicit Mention Models." Proceedings of the Third Asian Conference on Machine Learning, 2011.

BibTeX
@inproceedings{doppa2011acml-learning,
title = {{Learning Rules from Incomplete Examples via Implicit Mention Models}},
author = {Doppa, Janardhan Rao and Sorower, Mohammad Shahed and Nasresfahani, Mohammad and Irvine, Jed and Orr, Walker and Dietterich, Thomas G. and Fern, Xiaoli and Tadepalli, Prasad},
booktitle = {Proceedings of the Third Asian Conference on Machine Learning},
year = {2011},
  pages = {197--212},
volume = {20},
url = {https://mlanthology.org/acml/2011/doppa2011acml-learning/}
}