Using Attention in Belief Revision

Abstract

Belief revision for an intelligent system is usually computationally expensive. Here we tackle this problem by using focus in belief revision: that is, revision occurs only in the subset of beliefs under attention (in focus). Attention can be shifted within the belief base, thus allowing the use and revision of other subsets of beliefs. This attention-shifting belief revision architecture shows promise for efficient and natural revision of belief bases.
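The core idea in the abstract, restricting revision to the beliefs currently in focus and shifting attention to reach other subsets, can be sketched in a few lines of Python. This is a minimal illustration, not the authors' system: the names (FocusedBeliefBase, shift_attention, revise) are hypothetical, and a real revision operator would reason about logical consequence rather than simple literal-negation matching.

# Illustrative sketch of attention-limited belief revision.
# All names are hypothetical; beliefs are modeled as propositional
# literals, with "~p" denoting the negation of "p".

class FocusedBeliefBase:
    """Belief base in which revision only examines the beliefs in focus."""

    def __init__(self, beliefs):
        self.beliefs = set(beliefs)   # full belief base
        self.focus = set()            # subset currently under attention

    def shift_attention(self, subset):
        # Bring a different subset of beliefs into focus; beliefs outside
        # the focus are left untouched (and unexamined) by later revisions.
        self.focus = self.beliefs & set(subset)

    def revise(self, new_belief):
        # Retract only focused beliefs that directly contradict the input.
        # Contradictions outside the focus are ignored, which is where the
        # computational savings come from.
        negation = new_belief[1:] if new_belief.startswith("~") else "~" + new_belief
        conflicting = {b for b in self.focus if b == negation}
        self.beliefs -= conflicting
        self.focus -= conflicting
        self.beliefs.add(new_belief)
        self.focus.add(new_belief)


# Example: revising with "~wet" only resolves the contradiction once the
# weather-related beliefs are in focus; "battery_ok" is never examined.
kb = FocusedBeliefBase({"wet", "cold", "battery_ok"})
kb.shift_attention({"wet", "cold"})
kb.revise("~wet")
print(kb.beliefs)   # {'cold', 'battery_ok', '~wet'}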

Cite

Text

Huang et al. "Using Attention in Belief Revision." AAAI Conference on Artificial Intelligence, 1991.

Markdown

[Huang et al. "Using Attention in Belief Revision." AAAI Conference on Artificial Intelligence, 1991.](https://mlanthology.org/aaai/1991/huang1991aaai-using/)

BibTeX

@inproceedings{huang1991aaai-using,
  title     = {{Using Attention in Belief Revision}},
  author    = {Huang, Xueming and McCalla, Gordon I. and Neufeld, Eric},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {1991},
  pages     = {275--280},
  url       = {https://mlanthology.org/aaai/1991/huang1991aaai-using/}
}