Detecting Execution Failures Using Learned Action Models
Abstract
Planners reason with abstracted models of the behaviours they use to construct plans. When plans are turned into the instructions that drive an executive, the real behaviours interacting with the unpredictable uncertainties of the environment can lead to failure. One of the challenges for intelligent autonomy is to recognise when the actual execution of a behaviour has diverged so far from the expected behaviour that it can be considered to be a failure. In this paper we present an approach by which a trace of the execution of a behaviour is monitored by tracking its most likely explanation through a learned model of how the behaviour is normally executed. In this way, possible failures are identified as deviations from common patterns of the execution of the behaviour. We perform an experiment in which we inject errors into the behaviour of a robot performing a particular task, and explore how well a learned model of the task can detect where these errors occur.
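The abstract does not commit to a specific model class, but tracking the most likely explanation of an observation trace through a learned model of normal execution is commonly realised as Viterbi decoding over a hidden Markov model, with a failure flagged when the best explanation's likelihood drops sharply. A minimal sketch under that assumption (the state space, probabilities, trace encoding, and threshold below are all illustrative, not from the paper):

```python
import numpy as np

# Sketch: a learned HMM of "normal" execution with 3 hidden phases and
# 3 observation symbols (symbol 2 is rare everywhere, i.e. anomalous).
# We track the Viterbi score of the best explanation of the trace and
# flag a step as a possible failure when the incremental log-likelihood
# of extending that explanation falls below a threshold.
EPS = 1e-12  # avoid log(0) for impossible transitions/emissions

A = np.log(np.array([[0.8, 0.2, 0.0],   # phase transition probabilities
                     [0.0, 0.8, 0.2],
                     [0.0, 0.0, 1.0]]) + EPS)
B = np.log(np.array([[0.9, 0.09, 0.01],  # emission probabilities per phase
                     [0.09, 0.9, 0.01],
                     [0.9, 0.09, 0.01]]) + EPS)
pi = np.log(np.array([1.0, 0.0, 0.0]) + EPS)  # execution starts in phase 0

def monitor(trace, threshold=-3.0):
    """Return indices of trace steps whose best explanation is improbable."""
    v = pi + B[:, trace[0]]              # Viterbi scores after step 0
    prev_best = v.max()
    flagged = []
    for t, obs in enumerate(trace[1:], start=1):
        # best predecessor for each phase, then emit the observation
        v = (v[:, None] + A).max(axis=0) + B[:, obs]
        step_ll = v.max() - prev_best    # cost of explaining this step
        if step_ll < threshold:
            flagged.append(t)
        prev_best = v.max()
    return flagged

normal = [0, 0, 1, 1, 0]   # trace consistent with the learned model
faulty = [0, 0, 2, 1, 0]   # injected error: anomalous observation at step 2
```

Here `monitor(normal)` raises no alarm, while `monitor(faulty)` flags step 2, mirroring the paper's setup of injecting errors and checking where the model detects them; in practice the threshold would be calibrated on held-out normal traces.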
Cite
Text
Fox et al. "Detecting Execution Failures Using Learned Action Models." AAAI Conference on Artificial Intelligence, 2007.
Markdown
[Fox et al. "Detecting Execution Failures Using Learned Action Models." AAAI Conference on Artificial Intelligence, 2007.](https://mlanthology.org/aaai/2007/fox2007aaai-detecting/)
BibTeX
@inproceedings{fox2007aaai-detecting,
title = {{Detecting Execution Failures Using Learned Action Models}},
author = {Fox, Maria and Gough, Jonathan and Long, Derek},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2007},
pages = {968-973},
url = {https://mlanthology.org/aaai/2007/fox2007aaai-detecting/}
}