Answerer in Questioner's Mind: Information Theoretic Approach to Goal-Oriented Visual Dialog
Abstract
Goal-oriented dialog has received attention due to its numerous applications in artificial intelligence. Goal-oriented dialogue tasks occur when a questioner asks an action-oriented question and an answerer responds with the intent of letting the questioner know a correct action to take. To ask adequate questions, deep learning and reinforcement learning have recently been applied. However, these approaches struggle to find a competent recurrent neural questioner, owing to the complexity of learning a series of sentences. Motivated by theory of mind, we propose "Answerer in Questioner's Mind" (AQM), a novel information theoretic algorithm for goal-oriented dialog. With AQM, a questioner asks and infers based on an approximated probabilistic model of the answerer. The questioner figures out the answerer's intention by selecting a plausible question, explicitly calculating the information gain over the candidate intentions and the possible answers to each question. We test our framework on two goal-oriented visual dialog tasks: "MNIST Counting Dialog" and "GuessWhat?!". In our experiments, AQM outperforms comparative algorithms by a large margin.
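The question-selection criterion described in the abstract can be sketched in code. The snippet below is an illustrative minimal implementation of information-gain-based question selection, assuming discrete candidate intentions and a known (approximated) answerer model p(a | c, q); all function and variable names are hypothetical, not from the authors' code.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as {value: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def information_gain(prior, answer_model, question):
    """Expected reduction in uncertainty about the candidate intention c
    after asking `question`, under an approximated answerer model
    answer_model(c, q) -> {answer: prob}. Illustrative sketch only."""
    h_prior = entropy(prior)
    # Marginal answer distribution: p(a | q) = sum_c p(a | c, q) p(c)
    p_a = {}
    for c, pc in prior.items():
        for a, pa in answer_model(c, question).items():
            p_a[a] = p_a.get(a, 0.0) + pa * pc
    # Expected posterior entropy: sum_a p(a | q) * H(p(c | a, q))
    h_post = 0.0
    for a, pa_q in p_a.items():
        if pa_q == 0:
            continue
        posterior = {c: answer_model(c, question)[a] * pc / pa_q
                     for c, pc in prior.items()}
        h_post += pa_q * entropy(posterior)
    return h_prior - h_post

def select_question(prior, answer_model, candidate_questions):
    """Pick the candidate question with maximal information gain."""
    return max(candidate_questions,
               key=lambda q: information_gain(prior, answer_model, q))
```

With a uniform prior over two intentions and a question whose answer perfectly separates them, `information_gain` returns 1 bit, while a question whose answer is the same for every intention yields 0 bits.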
Cite
Text
Lee et al. "Answerer in Questioner's Mind: Information Theoretic Approach to Goal-Oriented Visual Dialog." Neural Information Processing Systems, 2018.
Markdown
[Lee et al. "Answerer in Questioner's Mind: Information Theoretic Approach to Goal-Oriented Visual Dialog." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/lee2018neurips-answerer/)
BibTeX
@inproceedings{lee2018neurips-answerer,
title = {{Answerer in Questioner's Mind: Information Theoretic Approach to Goal-Oriented Visual Dialog}},
author = {Lee, Sang-Woo and Heo, Yu-Jung and Zhang, Byoung-Tak},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {2579-2589},
url = {https://mlanthology.org/neurips/2018/lee2018neurips-answerer/}
}