WhyNot: Debugging Failed Queries in Large Knowledge Bases

Abstract

When a query to a knowledge-based system fails and returns unknown, users are confronted with a problem: Is relevant knowledge missing or incorrect? Is there a problem with the inference engine? Was the query ill-conceived? Finding the culprit in a large and complex knowledge base can be a hard and laborious task for knowledge engineers and might be impossible for non-expert users. To support users in such situations, we developed a new tool called WhyNot as part of the PowerLoom knowledge representation and reasoning system. To debug a failed query, WhyNot tries to generate a small set of plausible partial proofs that can guide the user to what knowledge might have been missing, or where the system might have failed to make a relevant inference. A first version of the system has been deployed to help debug queries to a version of the Cyc knowledge base containing over 1,000,000 facts and over 35,000 rules.

Cite

Text

Chalupsky and Russ. "WhyNot: Debugging Failed Queries in Large Knowledge Bases." AAAI Conference on Artificial Intelligence, 2002. doi:10.5555/777092.777224

Markdown

[Chalupsky and Russ. "WhyNot: Debugging Failed Queries in Large Knowledge Bases." AAAI Conference on Artificial Intelligence, 2002.](https://mlanthology.org/aaai/2002/chalupsky2002aaai-whynot/) doi:10.5555/777092.777224

BibTeX

@inproceedings{chalupsky2002aaai-whynot,
  title     = {{WhyNot: Debugging Failed Queries in Large Knowledge Bases}},
  author    = {Chalupsky, Hans and Russ, Thomas A.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2002},
  pages     = {870--877},
  doi       = {10.5555/777092.777224},
  url       = {https://mlanthology.org/aaai/2002/chalupsky2002aaai-whynot/}
}