Building Goal-Oriented Dialogue Systems with Situated Visual Context

Abstract

Goal-oriented dialogue agents can readily use the conversational context to understand their users' goals. However, in visually driven user experiences, these agents must also make sense of the on-screen context in order to provide a proper interactive experience. In this paper, we propose a novel multimodal conversational framework in which the dialogue agent's next action and its arguments are derived jointly, conditioned on both the conversational and the visual context. We demonstrate the proposed approach via a prototypical furniture shopping experience for a multimodal virtual assistant.
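The joint conditioning the abstract describes can be illustrated with a toy sketch: the next action and its arguments are chosen from both the user's utterance and the items currently on screen. The action labels, item schema, and keyword-overlap scoring below are illustrative assumptions, not the paper's actual model.

```python
# Toy sketch of predicting the agent's next action jointly from the
# conversational context (utterance) and the situated visual context
# (items visible on screen). All names and heuristics are assumptions.

def predict_next_action(utterance, screen_items):
    """Return the next action and its arguments.

    utterance:    the user's latest turn (conversational context).
    screen_items: dicts for items visible on screen (visual context),
                  e.g. {"id": 3, "name": "oak desk"}.
    """
    tokens = set(utterance.lower().split())
    # Score each on-screen item by word overlap with the utterance.
    best_item, best_score = None, 0
    for item in screen_items:
        score = len(tokens & set(item["name"].lower().split()))
        if score > best_score:
            best_item, best_score = item, score
    if best_item is not None:
        # The visual context resolves the referent: act on that item.
        return {"action": "select_item", "item_id": best_item["id"]}
    # Nothing visible matches: fall back to a clarification question.
    return {"action": "ask_clarification", "item_id": None}
```

For example, with a screen showing a grey sofa and an oak desk, the utterance "show me the oak desk" resolves to selecting the desk, while "what about lamps" yields a clarification request because no visible item matches.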

Cite

Text

Agarwal et al. "Building Goal-Oriented Dialogue Systems with Situated Visual Context." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I11.21710

Markdown

[Agarwal et al. "Building Goal-Oriented Dialogue Systems with Situated Visual Context." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/agarwal2022aaai-building/) doi:10.1609/AAAI.V36I11.21710

BibTeX

@inproceedings{agarwal2022aaai-building,
  title     = {{Building Goal-Oriented Dialogue Systems with Situated Visual Context}},
  author    = {Agarwal, Sanchit and Jezabek, Jan and Biswas, Arijit and Barut, Emre and Gao, Bill and Chung, Tagyoung},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {13149--13151},
  doi       = {10.1609/AAAI.V36I11.21710},
  url       = {https://mlanthology.org/aaai/2022/agarwal2022aaai-building/}
}