Belief Management for High-Level Robot Programs

Abstract

The robot programming and plan language IndiGolog allows for online execution of actions and offline projection of programs in dynamic and partly unknown environments. Basic assumptions are that the outcomes of primitive and sensing actions are correctly modeled, and that the agent is informed about all exogenous events beyond its control. In real-world applications, however, such assumptions do not hold: an action's outcome is error-prone and sensing results are noisy. In this paper, we present a belief management system in IndiGolog that is able to detect inconsistencies between a robot's modeled belief and what happened in reality. The system furthermore derives explanations and maintains a consistent belief. Our main contributions are (1) a belief management system following a history-based diagnosis approach that allows an agent to actively cope with faulty actions and the occurrence of exogenous events; and (2) an implementation in IndiGolog and experimental results from a delivery domain.

Cite

Text

Gspandl et al. "Belief Management for High-Level Robot Programs." International Joint Conference on Artificial Intelligence, 2011. doi:10.5591/978-1-57735-516-8/IJCAI11-156

Markdown

[Gspandl et al. "Belief Management for High-Level Robot Programs." International Joint Conference on Artificial Intelligence, 2011.](https://mlanthology.org/ijcai/2011/gspandl2011ijcai-belief/) doi:10.5591/978-1-57735-516-8/IJCAI11-156

BibTeX

@inproceedings{gspandl2011ijcai-belief,
  title     = {{Belief Management for High-Level Robot Programs}},
  author    = {Gspandl, Stephan and Pill, Ingo and Reip, Michael and Steinbauer, Gerald and Ferrein, Alexander},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2011},
  pages     = {900--905},
  doi       = {10.5591/978-1-57735-516-8/IJCAI11-156},
  url       = {https://mlanthology.org/ijcai/2011/gspandl2011ijcai-belief/}
}