A Model of Decidable Introspective Reasoning with Quantifying-in

Abstract

Since knowledge is usually incomplete, agents need to introspect on what they know and do not know. The best-known models of introspective reasoning suffer from intractability or even undecidability if the underlying language is first-order. To better reflect the fact that agents have limited resources, we recently proposed a model of decidable introspective reasoning in first-order knowledge bases (KBs). However, this model is deficient in that it does not allow for quantifying-in, which is needed to distinguish between knowing that and knowing who. In this paper, we extend our earlier work by adding quantifying-in and equality to a model of limited belief that integrates ideas from possible-world semantics and relevance logic.

Cite

Text

Lakemeyer. "A Model of Decidable Introspective Reasoning with Quantifying-in." International Joint Conference on Artificial Intelligence, 1991.

Markdown

[Lakemeyer. "A Model of Decidable Introspective Reasoning with Quantifying-in." International Joint Conference on Artificial Intelligence, 1991.](https://mlanthology.org/ijcai/1991/lakemeyer1991ijcai-model/)

BibTeX

@inproceedings{lakemeyer1991ijcai-model,
  title     = {{A Model of Decidable Introspective Reasoning with Quantifying-in}},
  author    = {Lakemeyer, Gerhard},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1991},
  pages     = {492--497},
  url       = {https://mlanthology.org/ijcai/1991/lakemeyer1991ijcai-model/}
}