Learning to Map Natural Language Instructions to Physical Quadcopter Control Using Simulated Flight
Abstract
We propose a joint simulation and real-world learning framework for mapping navigation instructions and raw first-person observations to continuous control. Our model estimates the need for environment exploration, predicts the likelihood of visiting environment positions during execution, and controls the agent to both explore and visit high-likelihood positions. We introduce Supervised Reinforcement Asynchronous Learning (SuReAL). Learning uses both simulation and real environments without requiring autonomous flight in the physical environment during training, and combines supervised learning for predicting positions to visit and reinforcement learning for continuous control. We evaluate our approach on a natural language instruction-following task with a physical quadcopter, and demonstrate effective execution and exploration behavior.
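The abstract's core idea, two learning processes running side by side on a shared model (supervised learning that predicts which positions an instruction should visit, and reinforcement learning that produces continuous control toward those positions), can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the 1-D environment, the `VisitationModel`/`ControlPolicy` names, the table-based goal predictor, and the scalar-gain policy are all assumptions made for the sake of a runnable example.

```python
import random

class VisitationModel:
    """Supervised stage (toy): maps an instruction to an estimated goal position."""
    def __init__(self):
        self.table = {}  # instruction -> estimated 1-D goal position

    def supervised_update(self, instruction, demonstrated_goal, lr=0.5):
        # Move the stored estimate toward the demonstrated goal position.
        old = self.table.get(instruction, 0.0)
        self.table[instruction] = old + lr * (demonstrated_goal - old)

    def predict(self, instruction):
        return self.table.get(instruction, 0.0)

class ControlPolicy:
    """RL stage (toy): learns a gain that drives the agent toward the goal."""
    def __init__(self):
        self.gain = 0.1

    def act(self, position, goal):
        # Continuous velocity command proportional to remaining distance.
        return self.gain * (goal - position)

    def rl_update(self, reward_grad, lr=0.05):
        # Crude policy-improvement step on the scalar gain.
        self.gain += lr * reward_grad

def train(demos, steps=200, seed=0):
    random.seed(seed)
    model, policy = VisitationModel(), ControlPolicy()
    for _ in range(steps):
        # "Asynchronous" in spirit: the paper runs the two learners
        # concurrently and exchanges the shared model; here we simply
        # alternate them within one loop.
        instruction, goal = random.choice(demos)
        model.supervised_update(instruction, goal)    # supervised process
        pos = 0.0
        for _ in range(20):                           # simulated rollout
            pos += policy.act(pos, model.predict(instruction))
        # Reward signal: increase the gain while the rollout undershoots.
        policy.rl_update(reward_grad=abs(goal) - abs(pos))
    return model, policy

demos = [("fly to the tree", 5.0), ("go to the rock", -3.0)]
model, policy = train(demos)
```

The two-process split mirrors the paper's design motivation: position prediction has dense supervision from demonstrations, while continuous control is easier to learn with reinforcement in simulation, so each signal trains the part of the model it suits best.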
Cite
Text
Blukis et al. "Learning to Map Natural Language Instructions to Physical Quadcopter Control Using Simulated Flight." Conference on Robot Learning, 2019.
Markdown
[Blukis et al. "Learning to Map Natural Language Instructions to Physical Quadcopter Control Using Simulated Flight." Conference on Robot Learning, 2019.](https://mlanthology.org/corl/2019/blukis2019corl-learning/)
BibTeX
@inproceedings{blukis2019corl-learning,
title = {{Learning to Map Natural Language Instructions to Physical Quadcopter Control Using Simulated Flight}},
author = {Blukis, Valts and Terme, Yannick and Niklasson, Eyvind and Knepper, Ross A. and Artzi, Yoav},
booktitle = {Conference on Robot Learning},
year = {2019},
pages = {1415--1438},
volume = {100},
url = {https://mlanthology.org/corl/2019/blukis2019corl-learning/}
}