VLN BERT: A Recurrent Vision-and-Language BERT for Navigation
Abstract
Accuracy on many visiolinguistic tasks has benefited significantly from the application of vision-and-language (V&L) BERT. However, its application to the task of vision-and-language navigation (VLN) remains limited. One reason for this is the difficulty of adapting the BERT architecture to the partially observable Markov decision process underlying VLN, which requires history-dependent attention and decision making. In this paper, we propose a recurrent BERT model that is time-aware for use in VLN. Specifically, we equip the BERT model with a recurrent function that maintains cross-modal state information for the agent. Through extensive experiments on R2R and REVERIE, we demonstrate that our model can replace more complex encoder-decoder models to achieve state-of-the-art results. Moreover, our approach can be generalised to other transformer-based architectures, supports pre-training, and is capable of solving navigation and referring expression tasks simultaneously.
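The abstract's core idea, a single recurrent state token that carries cross-modal history across navigation steps, can be illustrated with a minimal sketch. Everything below is an assumption for illustration (the class name `RecurrentStateStep`, the `action_head` view scorer, the 768-dim hidden size, and the freshly initialised transformer); it is not the authors' implementation, which builds on pre-trained V&L BERT weights.

```python
import torch
import torch.nn as nn

class RecurrentStateStep(nn.Module):
    """Illustrative sketch (not the paper's code): one state token is
    attended over jointly with language and vision tokens, and its
    updated value is carried forward to the next navigation step."""

    def __init__(self, hidden=768, heads=12, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.action_head = nn.Linear(hidden, 1)  # hypothetical per-view scorer

    def forward(self, state, lang, vis):
        # state: (B, 1, H); lang: (B, L, H); vis: (B, V, H)
        x = torch.cat([state, lang, vis], dim=1)
        x = self.encoder(x)
        new_state = x[:, :1, :]               # recurrent state, fed back next step
        vis_ctx = x[:, 1 + lang.size(1):, :]  # contextualised candidate views
        logits = self.action_head(vis_ctx).squeeze(-1)  # one score per view
        return new_state, logits

# Usage: initialise the state from a [CLS]-style language token,
# then update it once per navigation step (dummy tensors shown).
model = RecurrentStateStep()
lang = torch.randn(2, 20, 768)    # instruction tokens
state = lang[:, :1, :]            # initial recurrent state
for _ in range(3):                # a few navigation steps
    vis = torch.randn(2, 8, 768)  # candidate view features
    state, logits = model(state, lang, vis)
    action = logits.argmax(dim=-1)
```

The design choice being sketched is that recurrence is implemented not as a separate RNN but by re-injecting the updated state token into the transformer's input at every step, so the same self-attention stack handles both cross-modal fusion and history.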
Cite
Text
Hong et al. "VLN BERT: A Recurrent Vision-and-Language BERT for Navigation." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00169
Markdown
[Hong et al. "VLN BERT: A Recurrent Vision-and-Language BERT for Navigation." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/hong2021cvpr-vln/) doi:10.1109/CVPR46437.2021.00169
BibTeX
@inproceedings{hong2021cvpr-vln,
title = {{VLN BERT: A Recurrent Vision-and-Language BERT for Navigation}},
author = {Hong, Yicong and Wu, Qi and Qi, Yuankai and Rodriguez-Opazo, Cristian and Gould, Stephen},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {1643--1653},
doi = {10.1109/CVPR46437.2021.00169},
url = {https://mlanthology.org/cvpr/2021/hong2021cvpr-vln/}
}