Inferring the Goals of Communicating Agents from Actions and Instructions
Abstract
When humans cooperate, they frequently coordinate their activity through both verbal communication and non-verbal actions, using this information to infer a shared goal and plan. How can we model this inferential ability? In this paper, we introduce a model of a cooperative team where one agent, the principal, may communicate natural language instructions about their shared plan to another agent, the assistant, using GPT-3 as a likelihood function for instruction utterances. We then show how a third-person observer can infer the team's goal via multi-modal Bayesian inverse planning from actions and instructions, computing the posterior distribution over goals under the assumption that agents will act and communicate rationally to achieve them. We evaluate this approach by comparing it with human goal inferences in a multi-agent gridworld, finding that our model's inferences closely correlate with human judgments $(R = 0.96)$. We also find that, compared to inference from actions alone, instructions lead to more rapid and less uncertain goal inference, highlighting the importance of verbal communication for cooperative agents.
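As a sketch of the inference the abstract describes (the notation here is ours, not taken from the paper): write $g$ for the team's goal, $a_{1:T}$ for the observed actions, and $u_{1:T}$ for the instruction utterances. Assuming the observer scores each modality independently given the goal and the action history, the multi-modal posterior takes the generic inverse-planning form

$$P(g \mid a_{1:T}, u_{1:T}) \;\propto\; P(g) \prod_{t=1}^{T} P(a_t \mid g, a_{1:t-1}) \, P(u_t \mid g, a_{1:t-1}),$$

where the action likelihood $P(a_t \mid g, \cdot)$ comes from a model of rational planning toward $g$, and the utterance likelihood $P(u_t \mid g, \cdot)$ is scored by GPT-3, as stated above. The paper's exact factorization may differ; this is the standard form such a multi-modal Bayesian posterior takes.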
Cite
Text
Ying et al. "Inferring the Goals of Communicating Agents from Actions and Instructions." ICML 2023 Workshops: ToM, 2023.
Markdown
[Ying et al. "Inferring the Goals of Communicating Agents from Actions and Instructions." ICML 2023 Workshops: ToM, 2023.](https://mlanthology.org/icmlw/2023/ying2023icmlw-inferring/)
BibTeX
@inproceedings{ying2023icmlw-inferring,
title = {{Inferring the Goals of Communicating Agents from Actions and Instructions}},
author = {Ying, Lance and Zhi-Xuan, Tan and Mansinghka, Vikash and Tenenbaum, Joshua B.},
booktitle = {ICML 2023 Workshops: ToM},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/ying2023icmlw-inferring/}
}