Revision of Reduced Theories
Abstract
Revising a theory to cover a set of new data can be very difficult. When the theory is represented by propositional rules with uncertainty, many strength refinement problems are shown to be computationally intractable (NP-hard) to solve in a "deep" theory. In this paper, we show that when the same theory is represented in a reduced or flat structure some refinement problems are strictly easier to solve. We prove that some strength refinement problems that are NP-hard in the "deep" theory can be solved in polynomial time in the reduced theory, while some others still remain NP-hard.
Cite
Text
Ling and Valtorta. "Revision of Reduced Theories." International Conference on Machine Learning, 1991. doi:10.1016/B978-1-55860-200-7.50106-9
Markdown
[Ling and Valtorta. "Revision of Reduced Theories." International Conference on Machine Learning, 1991.](https://mlanthology.org/icml/1991/ling1991icml-revision/) doi:10.1016/B978-1-55860-200-7.50106-9
BibTeX
@inproceedings{ling1991icml-revision,
title = {{Revision of Reduced Theories}},
author = {Ling, Xiaofeng and Valtorta, Marco},
booktitle = {International Conference on Machine Learning},
year = {1991},
pages = {519-523},
doi = {10.1016/B978-1-55860-200-7.50106-9},
url = {https://mlanthology.org/icml/1991/ling1991icml-revision/}
}