Soft Expert Reward Learning for Vision-and-Language Navigation

Abstract

Vision-and-Language Navigation (VLN) requires an agent to reach a specified location in an unseen environment by following natural language instructions. Dominant methods based on supervised learning clone the expert's behaviour and thus perform well on seen environments, while showing restricted performance on unseen ones. Reinforcement Learning (RL) based models show better generalisation ability but have issues as well, one of which is the need for a large amount of manual reward engineering. In this paper, we introduce a Soft Expert Reward Learning (SERL) model to overcome the reward engineering and generalisation problems of the VLN task. Our proposed method consists of two complementary components: a Soft Expert Distillation (SED) module encourages the agent to behave like an expert as much as possible, but in a soft fashion, while a Self Perceiving (SP) module pushes the agent towards the final destination as fast as possible. Empirically, we evaluate our model on the VLN seen, unseen and test splits, and the model outperforms state-of-the-art methods on most evaluation metrics.
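The abstract describes two reward signals but gives no formulas. A minimal sketch of how such a combined reward might look, under the assumption that SED supplies a soft imitation signal (agent rewarded for actions likely under an expert policy) and SP supplies a progress signal (agent rewarded for reducing distance to the goal) — the function names and weighting below are illustrative, not the authors' actual definitions:

```python
import math


def sed_reward(expert_action_prob: float) -> float:
    """Hypothetical soft-expert signal: higher when the agent's chosen
    action is likely under an expert policy (a 'soft' imitation reward,
    rather than hard behaviour cloning)."""
    return math.log(max(expert_action_prob, 1e-8))


def sp_reward(prev_dist: float, curr_dist: float) -> float:
    """Hypothetical self-perceiving signal: positive when the step
    reduces the remaining distance to the destination."""
    return prev_dist - curr_dist


def total_reward(expert_action_prob: float,
                 prev_dist: float,
                 curr_dist: float,
                 alpha: float = 0.5) -> float:
    """Blend the two components; alpha (assumed) trades off
    imitating the expert against making progress to the goal."""
    return (alpha * sed_reward(expert_action_prob)
            + (1.0 - alpha) * sp_reward(prev_dist, curr_dist))
```

Such a shaped reward would replace hand-engineered rewards: the imitation term generalises the expert's behaviour softly, while the progress term keeps the agent moving towards the destination.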

Cite

Text

Wang et al. "Soft Expert Reward Learning for Vision-and-Language Navigation." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58545-7_8

Markdown

[Wang et al. "Soft Expert Reward Learning for Vision-and-Language Navigation." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/wang2020eccv-soft/) doi:10.1007/978-3-030-58545-7_8

BibTeX

@inproceedings{wang2020eccv-soft,
  title     = {{Soft Expert Reward Learning for Vision-and-Language Navigation}},
  author    = {Wang, Hu and Wu, Qi and Shen, Chunhua},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58545-7_8},
  url       = {https://mlanthology.org/eccv/2020/wang2020eccv-soft/}
}