Learning with Knowledge from Multiple Experts
Abstract
The use of domain knowledge in a learner can greatly improve the models it produces. However, high-quality expert knowledge is very difficult to obtain. Traditionally, researchers have assumed that knowledge comes from a single self-consistent source. A little-explored but often more feasible alternative is to use multiple weaker sources. In this paper we take a step in this direction by developing a method for learning the structure of a Bayesian network from multiple experts. Data is then used to refine the structure and estimate parameters. A simple analysis shows that even relatively few noisy experts can produce high-quality knowledge when combined. Experiments with real and simulated experts in a variety of domains show the benefits of this approach.

Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003)
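The claim that combining a few noisy experts yields high-quality knowledge can be illustrated with a simple majority-vote calculation (a hypothetical sketch of the general intuition, not the paper's actual combination method): if each expert independently answers a binary question, such as whether an edge belongs in the network, with accuracy p > 0.5, the probability that the majority is correct rises quickly with the number of experts.

```python
import math

def majority_vote_accuracy(p, n):
    """Probability that a majority of n independent experts,
    each correct with probability p, gives the right answer.
    n is assumed odd so there are no ties."""
    k_min = n // 2 + 1  # smallest number of correct votes that forms a majority
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# Each expert is only mildly better than chance (p = 0.65),
# but pooling several of them sharply boosts reliability.
for n in (1, 3, 7, 15):
    print(n, round(majority_vote_accuracy(0.65, n), 3))
```

This is the standard Condorcet-jury-style argument; the independence assumption is optimistic for real experts, but it conveys why even weak sources are valuable in aggregate.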
Cite
Text
Richardson and Domingos. "Learning with Knowledge from Multiple Experts." International Conference on Machine Learning, 2003.
Markdown
[Richardson and Domingos. "Learning with Knowledge from Multiple Experts." International Conference on Machine Learning, 2003.](https://mlanthology.org/icml/2003/richardson2003icml-learning/)
BibTeX
@inproceedings{richardson2003icml-learning,
title = {{Learning with Knowledge from Multiple Experts}},
author = {Richardson, Matthew and Domingos, Pedro M.},
booktitle = {International Conference on Machine Learning},
year = {2003},
pages = {624--631},
url = {https://mlanthology.org/icml/2003/richardson2003icml-learning/}
}