Learning from Demonstration for Goal-Driven Autonomy
Abstract
Goal-driven autonomy (GDA) is a conceptual model for creating an autonomous agent that monitors a set of expectations during plan execution, detects when discrepancies occur, builds explanations for the cause of failures, and formulates new goals to pursue when planning failures arise. While this framework enables the development of agents that can operate in complex and dynamic environments, implementing the logic for each of the subtasks in the model requires substantial domain engineering. We present a method using case-based reasoning and intent recognition in order to build GDA agents that learn from demonstrations. Our approach reduces the amount of domain engineering necessary to implement GDA agents and learns expectations, explanations, and goals from expert demonstrations. We have applied this approach to build an agent for the real-time strategy game StarCraft. Our results show that integrating the GDA conceptual model into the agent greatly improves its win rate.
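The GDA cycle the abstract describes (monitor expectations, detect discrepancies, explain them, formulate new goals) can be sketched as a minimal loop. This is an illustrative assumption of the general conceptual model, not the authors' StarCraft implementation; all names (`gda_step`, the toy "base" domain, the `explain`/`formulate` callables) are hypothetical.

```python
def gda_step(observations, expectations, explain, formulate_goal):
    """One pass of the GDA cycle: compare observed state against
    expectations, explain each discrepancy, and emit new goals."""
    goals = []
    for key, expected in expectations.items():
        actual = observations.get(key)
        if actual != expected:                      # discrepancy detection
            cause = explain(key, expected, actual)  # explanation generation
            goals.append(formulate_goal(cause))     # goal formulation
    return goals

# Toy domain: the agent expected to hold its base, but it was lost,
# so a new "restore" goal is formulated.
explain = lambda key, exp, act: f"{key} changed from {exp} to {act}"
formulate = lambda cause: ("restore", cause)
goals = gda_step({"base": "lost"}, {"base": "held"}, explain, formulate)
```

In the paper's learning-from-demonstration setting, the `explain` and `formulate_goal` components would be learned from expert traces rather than hand-engineered as above.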
Cite
Text
Weber et al. "Learning from Demonstration for Goal-Driven Autonomy." AAAI Conference on Artificial Intelligence, 2012. doi:10.1609/AAAI.V26I1.8311
Markdown
[Weber et al. "Learning from Demonstration for Goal-Driven Autonomy." AAAI Conference on Artificial Intelligence, 2012.](https://mlanthology.org/aaai/2012/weber2012aaai-learning/) doi:10.1609/AAAI.V26I1.8311
BibTeX
@inproceedings{weber2012aaai-learning,
title = {{Learning from Demonstration for Goal-Driven Autonomy}},
author = {Weber, Ben George and Mateas, Michael and Jhala, Arnav},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2012},
pages = {1176-1182},
doi = {10.1609/AAAI.V26I1.8311},
url = {https://mlanthology.org/aaai/2012/weber2012aaai-learning/}
}