Learning to Communicate and Solve Visual Blocks-World Tasks
Abstract
We study emergent communication between speaker and listener recurrent neural-network agents tasked with cooperatively constructing a blocks-world image that matches a target sampled from a generative grammar of block configurations. The speaker receives the target image and learns to emit a sequence of discrete symbols from a fixed vocabulary. The listener learns to construct a blocks-world image by choosing block-placement actions as a function of the speaker's full utterance and the image of the ongoing construction. Our contributions are (a) the introduction of a task domain for studying emergent communication that is both challenging and affords useful analyses of the emergent protocols; (b) an empirical comparison of the interpolation and extrapolation performance of training via supervised, (contextual) bandit, and reinforcement learning (RL); and (c) evidence that the protocol emerging under RL exhibits interesting linguistic properties not found under the other two training regimes.
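The sketch below (in PyTorch) illustrates one plausible instantiation of the speaker-listener setup the abstract describes, trained with a REINFORCE-style update as in the RL condition. All hyperparameters (VOCAB_SIZE, UTTER_LEN, IMG_FEATS, NUM_ACTIONS), the start-token convention, and the reward are assumptions for illustration, not the paper's actual configuration; image features stand in for a learned image encoder.

import torch
import torch.nn as nn
from torch.distributions import Categorical

VOCAB_SIZE = 10    # fixed symbol vocabulary (size is an assumption)
UTTER_LEN = 5      # utterance length (assumption)
IMG_FEATS = 128    # image-encoder feature size (assumption)
HIDDEN = 256
NUM_ACTIONS = 20   # discrete block-placement actions (assumption)

class Speaker(nn.Module):
    """Conditions an RNN on the target image and emits discrete symbols."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.init_h = nn.Linear(IMG_FEATS, HIDDEN)  # image -> initial hidden state
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, target_feats):
        h = torch.tanh(self.init_h(target_feats))
        # Simplification: symbol 0 doubles as the start token.
        sym = torch.zeros(target_feats.size(0), dtype=torch.long)
        symbols, log_probs = [], []
        for _ in range(UTTER_LEN):
            h = self.rnn(self.embed(sym), h)
            dist = Categorical(logits=self.out(h))
            sym = dist.sample()                     # discrete symbol
            symbols.append(sym)
            log_probs.append(dist.log_prob(sym))
        return torch.stack(symbols, 1), torch.stack(log_probs, 1)

class Listener(nn.Module):
    """Summarizes the full utterance, then chooses a placement action
    given features of the construction so far."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.msg_rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.policy = nn.Linear(HIDDEN + IMG_FEATS, NUM_ACTIONS)

    def forward(self, symbols, canvas_feats):
        _, h = self.msg_rnn(self.embed(symbols))    # read the whole utterance
        logits = self.policy(torch.cat([h[-1], canvas_feats], dim=-1))
        return Categorical(logits=logits)           # action distribution

# REINFORCE-style joint update (one placement step shown; the shared
# task reward here is a random stand-in):
speaker, listener = Speaker(), Listener()
opt = torch.optim.Adam(list(speaker.parameters()) + list(listener.parameters()))
target_feats = torch.randn(4, IMG_FEATS)            # stand-in image features
canvas_feats = torch.randn(4, IMG_FEATS)
symbols, spk_logp = speaker(target_feats)
action_dist = listener(symbols, canvas_feats)
action = action_dist.sample()
reward = torch.rand(4)                              # stand-in reward signal
loss = -(reward * (spk_logp.sum(1) + action_dist.log_prob(action))).mean()
opt.zero_grad()
loss.backward()
opt.step()

Because the symbols are sampled rather than soft, the speaker receives gradient only through the log-probability terms; the supervised and contextual-bandit conditions compared in the paper would replace this reward-weighted loss with their respective objectives.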
Cite
Text
Zhang et al. "Learning to Communicate and Solve Visual Blocks-World Tasks." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33015781
Markdown
[Zhang et al. "Learning to Communicate and Solve Visual Blocks-World Tasks." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/zhang2019aaai-learning-c/) doi:10.1609/AAAI.V33I01.33015781
BibTeX
@inproceedings{zhang2019aaai-learning-c,
title = {{Learning to Communicate and Solve Visual Blocks-World Tasks}},
author = {Zhang, Qi and Lewis, Richard L. and Singh, Satinder and Durfee, Edmund H.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {5781--5788},
doi = {10.1609/AAAI.V33I01.33015781},
url = {https://mlanthology.org/aaai/2019/zhang2019aaai-learning-c/}
}