Trust-Sensitive Belief Revision

Abstract

Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine whether that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we examine its properties. In particular, we show that trust-sensitive revision is manipulable, in the sense that agents can sometimes have an incentive to pass on misleading information. When multiple reporting agents are involved, we use a distance function over states to represent differing degrees of trust; this ensures that the most trusted reports will be believed.
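The partition-based pre-processing described above can be illustrated with a minimal sketch. This is not the paper's formal construction: the string-valued states, the simple set-based revision operator, and the example partition below are all illustrative assumptions. The key idea shown is that a report is first weakened to the union of partition cells it intersects (the agent is only trusted to distinguish states in different cells), and only then used for revision.

```python
# Minimal sketch of trust-sensitive revision (illustrative, not the paper's
# formal machinery). States are strings; beliefs and reports are sets of
# states; an agent's trust partition is a list of disjoint cells.

def relativize(report, partition):
    """Weaken a report to the union of partition cells it intersects.

    The reporting agent is not trusted to distinguish states inside a
    single cell, so any cell touched by the report is included wholesale.
    """
    return set().union(*(cell for cell in partition if cell & report))

def revise(beliefs, evidence):
    """Toy set-based revision: keep belief states consistent with the
    evidence if any exist; otherwise adopt the evidence outright."""
    overlap = beliefs & evidence
    return overlap if overlap else set(evidence)

def trust_sensitive_revise(beliefs, report, partition):
    """Pre-process the report through the trust partition, then revise."""
    return revise(beliefs, relativize(report, partition))

# Example: four states over two propositions, rain (r) vs. sun (s) and
# traffic (t) vs. none (n). The agent is trusted only on the weather, so
# the partition groups states that agree on rain/sun.
partition = [{"rt", "rn"}, {"st", "sn"}]

# The agent reports "rain and traffic"; only the weather half is believed.
weakened = trust_sensitive_revise({"sn", "rn"}, {"rt"}, partition)
print(weakened)  # only the rain information survives the trust filter
```

Note that the traffic component of the report is discarded before revision ever happens, which is exactly the sense in which trust acts as a pre-processing step rather than a modification of the revision operator itself.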

Cite

Text

Hunter and Booth. "Trust-Sensitive Belief Revision." International Joint Conference on Artificial Intelligence, 2015.

Markdown

[Hunter and Booth. "Trust-Sensitive Belief Revision." International Joint Conference on Artificial Intelligence, 2015.](https://mlanthology.org/ijcai/2015/hunter2015ijcai-trust/)

BibTeX

@inproceedings{hunter2015ijcai-trust,
  title     = {{Trust-Sensitive Belief Revision}},
  author    = {Hunter, Aaron and Booth, Richard},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2015},
  pages     = {3062--3068},
  url       = {https://mlanthology.org/ijcai/2015/hunter2015ijcai-trust/}
}