Multi-Agent Intention Progression with Reward Machines
Abstract
Recent work in multi-agent intention scheduling has shown that enabling agents to predict the actions of other agents when choosing their own actions can be beneficial. However, existing approaches to 'intention-aware' scheduling assume that the programs of other agents are known, or are "similar" to that of the agent making the prediction. While this assumption is reasonable in some circumstances, it is less plausible when the agents are not co-designed. In this paper, we present a new approach to multi-agent intention scheduling in which agents predict the actions of other agents based on a high-level specification of the tasks performed by an agent in the form of a reward machine (RM), rather than on its (assumed) program. We show how a reward machine can be used to generate tree and rollout policies for an MCTS-based scheduler. We evaluate our approach in a range of multi-agent environments, and show that RM-based scheduling out-performs previous intention-aware scheduling approaches in settings where agents are not co-designed.
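For readers unfamiliar with reward machines, the sketch below is an illustrative, simplified example (not taken from the paper): an RM is a finite-state machine whose transitions are labelled by propositions observed in the environment and annotated with rewards. The `RewardMachine` class, the proposition names, and the `rollout_value` helper are all hypothetical; they only show how an RM describing another agent's task could score simulated trajectories, which is the kind of signal an MCTS rollout policy could be biased by.

```python
from dataclasses import dataclass, field

@dataclass
class RewardMachine:
    """Minimal reward machine: transitions map (state, label) -> (next state, reward)."""
    initial_state: str
    transitions: dict = field(default_factory=dict)

    def step(self, state, label):
        # Labels with no matching transition leave the RM state unchanged, reward 0.
        return self.transitions.get((state, label), (state, 0.0))

# Hypothetical RM for another agent believed to first collect a key, then open a door.
rm = RewardMachine(
    initial_state="u0",
    transitions={
        ("u0", frozenset({"has_key"})): ("u1", 0.0),
        ("u1", frozenset({"door_open"})): ("u2", 1.0),
    },
)

def rollout_value(rm, trajectory_labels):
    """Score a simulated trajectory by the RM reward it accrues along the way."""
    state, total = rm.initial_state, 0.0
    for label in trajectory_labels:
        state, reward = rm.step(state, frozenset(label))
        total += reward
    return total

print(rollout_value(rm, [{"has_key"}, {"door_open"}]))  # -> 1.0
```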
Cite
Text
Dann et al. "Multi-Agent Intention Progression with Reward Machines." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/31
Markdown
[Dann et al. "Multi-Agent Intention Progression with Reward Machines." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/dann2022ijcai-multi/) doi:10.24963/IJCAI.2022/31
BibTeX
@inproceedings{dann2022ijcai-multi,
title = {{Multi-Agent Intention Progression with Reward Machines}},
author = {Dann, Michael and Yao, Yuan and Alechina, Natasha and Logan, Brian and Thangarajah, John},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {215-222},
doi = {10.24963/IJCAI.2022/31},
url = {https://mlanthology.org/ijcai/2022/dann2022ijcai-multi/}
}