Trust as a Precursor to Belief Revision
Abstract
Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine whether that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we prove a representation result that characterizes the class of trust-sensitive revision operators in terms of a set of postulates. We also show that trust-sensitive revision is manipulable, in the sense that agents can sometimes have an incentive to pass on misleading information.
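The core pre-processing step described above can be illustrated concretely. The sketch below is an illustrative reading of the abstract, not the paper's formal construction: a report is weakened to the union of the partition cells it intersects, so the reporting agent only conveys distinctions it is trusted to make. All names (`relativize`, the encoding of states as frozensets of true atoms) are assumptions for this example.

```python
# Illustrative sketch: relativizing a report to a trust partition before
# revision, based on the abstract's description. Names are hypothetical.

from itertools import chain


def relativize(report_states, partition):
    """Weaken a report to the union of partition cells it intersects.

    The reporting agent is only trusted to distinguish states lying in
    different cells of its partition, so every cell that overlaps the
    report must be kept in full.
    """
    return set(chain.from_iterable(
        cell for cell in partition if cell & report_states
    ))


# States are truth assignments over {p, q}, encoded as frozensets of the
# atoms that are true.
states = {frozenset(), frozenset({"p"}), frozenset({"q"}),
          frozenset({"p", "q"})}

# Trust partition: the agent can tell whether p holds, but not whether q does.
partition = [
    {frozenset(), frozenset({"q"})},            # cells where p is false
    {frozenset({"p"}), frozenset({"p", "q"})},  # cells where p is true
]

# Report "p and q" is more specific than the agent is trusted to be.
report = {frozenset({"p", "q"})}

weakened = relativize(report, partition)
# The untrusted q-information is discarded; only "p" survives, and the
# weakened report is what gets passed to the underlying revision operator.
assert weakened == {frozenset({"p"}), frozenset({"p", "q"})}
```

Under this reading, trust-sensitive revision is the composition of `relativize` with an ordinary AGM-style revision operator, which is what places it among selective revision operators.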
Cite
Text
Booth and Hunter. "Trust as a Precursor to Belief Revision." Journal of Artificial Intelligence Research, 2018. doi:10.1613/JAIR.5521

Markdown

[Booth and Hunter. "Trust as a Precursor to Belief Revision." Journal of Artificial Intelligence Research, 2018.](https://mlanthology.org/jair/2018/booth2018jair-trust/) doi:10.1613/JAIR.5521

BibTeX
@article{booth2018jair-trust,
title = {{Trust as a Precursor to Belief Revision}},
author = {Booth, Richard and Hunter, Aaron},
journal = {Journal of Artificial Intelligence Research},
year = {2018},
pages = {699--722},
doi = {10.1613/JAIR.5521},
volume = {61},
url = {https://mlanthology.org/jair/2018/booth2018jair-trust/}
}