Learning Abduction Under Partial Observability
Abstract
Our work extends Juba’s formulation of learning abductive reasoning from examples, in which both the relative plausibility of various explanations and which explanations are valid are learned directly from data. We extend the formulation to consider partially observed examples, along with declarative background knowledge about the missing data. We show that it is possible to use implicitly learned rules together with the explicitly given declarative knowledge to support hypotheses in the course of abduction. We also observe that when a small explanation exists, it is possible to obtain a much-improved guarantee in the challenging exception-tolerant setting.
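As a rough illustration only (not the paper's algorithm or its formal guarantees), the sketch below brute-forces a small conjunctive explanation for a query over partially observed Boolean examples, preferring explanations whose covered examples reliably exhibit the query. The attribute names, toy data, scoring rule, and thresholds are all invented for the example.

```python
# Toy sketch of abduction over partially observed examples (illustrative only).
# Each example is a dict of observed attribute -> True/False; missing keys are unobserved.
from itertools import combinations

examples = [
    {"rain": True, "sprinkler": False, "wet_grass": True},
    {"rain": True, "wet_grass": True},                      # "sprinkler" unobserved
    {"sprinkler": True, "wet_grass": True},                 # "rain" unobserved
    {"rain": False, "sprinkler": False, "wet_grass": False},
]
query = ("wet_grass", True)
attributes = ["rain", "sprinkler"]

def literal_holds(example, attr, value):
    """True/False if the attribute is observed, None if it is unobserved."""
    return None if attr not in example else example[attr] == value

def consistent(example, conjunction):
    """A conjunction is consistent with a partial example unless some literal is falsified."""
    return all(literal_holds(example, a, v) is not False for a, v in conjunction)

def score(conjunction, examples, query):
    """Return (coverage, support): how often the explanation could hold, and how
    often the query is observed to hold among those covered examples."""
    covered = [e for e in examples if consistent(e, conjunction)]
    if not covered:
        return 0.0, 0.0
    supported = [e for e in covered if literal_holds(e, *query) is True]
    return len(covered) / len(examples), len(supported) / len(covered)

def best_small_explanation(examples, query, attributes, max_size=2, min_coverage=0.25):
    """Enumerate conjunctions of at most max_size literals over distinct attributes
    and keep the most supportive one, breaking ties by coverage."""
    literals = [(a, v) for a in attributes for v in (True, False)]
    best, best_key = None, (-1.0, -1.0)
    for k in range(1, max_size + 1):
        for conj in combinations(literals, k):
            if len({a for a, _ in conj}) < k:  # skip contradictory conjunctions
                continue
            coverage, support = score(conj, examples, query)
            if coverage >= min_coverage and (support, coverage) > best_key:
                best, best_key = conj, (support, coverage)
    return best, best_key

explanation, (support, coverage) = best_small_explanation(examples, query, attributes)
print(explanation, f"support={support:.2f}", f"coverage={coverage:.2f}")
```

On this toy data the search returns the single literal `("rain", True)`, which is consistent with three of the four examples, all of which observe the query to hold; the paper's setting additionally handles declarative background knowledge about the unobserved attributes and exceptions, which this sketch omits.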
Cite
Text
Juba et al. "Learning Abduction Under Partial Observability." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.12188Markdown
[Juba et al. "Learning Abduction Under Partial Observability." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/juba2018aaai-learning-a/) doi:10.1609/AAAI.V32I1.12188BibTeX
@inproceedings{juba2018aaai-learning-a,
title = {{Learning Abduction Under Partial Observability}},
author = {Juba, Brendan and Li, Zongyi and Miller, Evan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
pages = {8097--8098},
doi = {10.1609/AAAI.V32I1.12188},
url = {https://mlanthology.org/aaai/2018/juba2018aaai-learning-a/}
}