Learning to Set Waypoints for Audio-Visual Navigation

Abstract

In audio-visual navigation, an agent intelligently travels through a complex, unmapped 3D environment using both sights and sounds to find a sound source (e.g., a phone ringing in another room). Existing models learn to act at a fixed granularity of agent motion and rely on simple recurrent aggregations of the audio observations. We introduce a reinforcement learning approach to audio-visual navigation with two key novel elements: 1) waypoints that are dynamically set and learned end-to-end within the navigation policy, and 2) an acoustic memory that provides a structured, spatially grounded record of what the agent has heard as it moves. Both new ideas capitalize on the synergy of audio and visual data for revealing the geometry of an unmapped space. We demonstrate our approach on two challenging datasets of real-world 3D scenes, Replica and Matterport3D. Our model improves the state of the art by a substantial margin, and our experiments reveal that learning the links between sights, sounds, and space is essential for audio-visual navigation.
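
The two ideas above can be pictured with a small, hypothetical sketch: an acoustic memory as a top-down grid that records how loud the source sounded at each visited location, plus a greedy waypoint rule standing in for the learned policy. The names (`AcousticMemory`, `set_waypoint`) and the grid representation are assumptions made purely for illustration; the paper's actual model sets waypoints end-to-end with reinforcement learning rather than by a hand-coded rule.

```python
# Hypothetical illustration only: a toy "acoustic memory" as a top-down grid
# that accumulates the audio intensity heard at each visited cell, plus a
# greedy stand-in for the learned waypoint-setting policy.
import numpy as np


class AcousticMemory:
    def __init__(self, size: int = 20):
        # intensity[i, j] = running max of audio intensity heard at cell (i, j)
        self.intensity = np.zeros((size, size), dtype=np.float32)
        self.visited = np.zeros((size, size), dtype=bool)

    def update(self, cell: tuple[int, int], heard_intensity: float) -> None:
        i, j = cell
        self.intensity[i, j] = max(self.intensity[i, j], heard_intensity)
        self.visited[i, j] = True

    def set_waypoint(self, cell: tuple[int, int]) -> tuple[int, int]:
        # Greedy substitute for the learned policy: take one grid step toward
        # the loudest cell recorded so far.
        target = np.unravel_index(np.argmax(self.intensity), self.intensity.shape)
        i, j = cell
        return (i + int(np.sign(target[0] - i)), j + int(np.sign(target[1] - j)))


if __name__ == "__main__":
    mem = AcousticMemory()
    mem.update((5, 5), heard_intensity=0.2)  # faint audio at the start cell
    mem.update((5, 6), heard_intensity=0.5)  # louder one cell to the east
    print(mem.set_waypoint((5, 5)))          # -> (5, 6), toward the loudest heard cell
```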

Cite

Text

Chen et al. "Learning to Set Waypoints for Audio-Visual Navigation." International Conference on Learning Representations, 2021.

Markdown

[Chen et al. "Learning to Set Waypoints for Audio-Visual Navigation." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/chen2021iclr-learning-a/)

BibTeX

@inproceedings{chen2021iclr-learning-a,
  title     = {{Learning to Set Waypoints for Audio-Visual Navigation}},
  author    = {Chen, Changan and Majumder, Sagnik and Al-Halah, Ziad and Gao, Ruohan and Ramakrishnan, Santhosh Kumar and Grauman, Kristen},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/chen2021iclr-learning-a/}
}