Learning End-to-End Multimodal Sensor Policies for Autonomous Navigation
Abstract
We propose a multimodal end-to-end policy based on deep reinforcement learning (DRL) that leverages sensor fusion to reduce the performance drop in noisy environments from 50% to 10% relative to the baseline, and that remains functional even under partial sensor failure. It does so through a novel stochastic technique called Sensor Dropout, which reduces sensitivity to any sensor subset, and a new auxiliary loss on the policy network, applied alongside the standard DRL loss, that reduces action variation.
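The core idea of Sensor Dropout is to randomly mask entire sensor modalities during training so the policy cannot over-rely on any one of them. A minimal sketch of that idea is below; this is our own illustration with hypothetical names, not the authors' implementation, and the dropout-style rescaling of surviving sensors is an assumption.

```python
import random

def sensor_dropout(features, keep_prob=0.5):
    """Randomly zero out whole sensor feature blocks (Sensor Dropout sketch).

    `features` is a list of per-sensor feature vectors (lists of floats).
    At least one sensor is always kept so the policy never receives an
    all-zero observation; surviving sensors are rescaled, as in standard
    dropout, to keep the expected input magnitude roughly constant.
    """
    mask = [random.random() < keep_prob for _ in features]
    if not any(mask):
        # Guarantee at least one active modality.
        mask[random.randrange(len(features))] = True
    scale = len(features) / sum(mask)
    return [
        [x * scale for x in f] if keep else [0.0] * len(f)
        for f, keep in zip(features, mask)
    ]
```

At test time the mask is simply disabled, so the same policy runs on the full sensor suite or, if a sensor fails, on whatever subset remains.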
Cite
Text
Liu et al. "Learning End-to-End Multimodal Sensor Policies for Autonomous Navigation." Proceedings of the 1st Annual Conference on Robot Learning, 2017.

Markdown
[Liu et al. "Learning End-to-End Multimodal Sensor Policies for Autonomous Navigation." Proceedings of the 1st Annual Conference on Robot Learning, 2017.](https://mlanthology.org/corl/2017/liu2017corl-learning/)

BibTeX
@inproceedings{liu2017corl-learning,
title = {{Learning End-to-End Multimodal Sensor Policies for Autonomous Navigation}},
author = {Liu, Guan-Horng and Siravuru, Avinash and Prabhakar, Sai and Veloso, Manuela and Kantor, George},
booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
year = {2017},
pages = {249-261},
volume = {78},
url = {https://mlanthology.org/corl/2017/liu2017corl-learning/}
}