Causal Navigation by Continuous-Time Neural Networks
Abstract
Imitation learning enables high-fidelity, vision-based policy learning within rich, photorealistic environments. However, such techniques typically rely on discrete-time neural models and generalize poorly under domain shifts because they fail to account for the causal relationships between the agent and the environment. In this paper, we propose a theoretical and experimental framework for learning causal representations with continuous-time neural networks, and demonstrate their advantages over discrete-time counterparts. We evaluate our method on visual-control learning of drones across a series of complex tasks, ranging from short- and long-term navigation to chasing static and dynamic objects through photorealistic environments. Our results demonstrate that causal continuous-time deep models perform robust navigation tasks where advanced recurrent models fail. These models learn complex causal control representations directly from raw visual inputs and scale to solve a variety of tasks using imitation learning.
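For intuition, below is a minimal sketch of a liquid time-constant (LTC) style cell, the class of continuous-time recurrent models this line of work builds on, using the fused semi-implicit Euler update from Hasani et al. (2021). The PyTorch module, layer sizes, and parameter names (`f`, `tau`, `A`) are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class LTCCell(nn.Module):
    """Sketch of a liquid time-constant (LTC) style continuous-time RNN cell.

    Implements the fused semi-implicit Euler step of
        dh/dt = -(1/tau + f(x, h)) * h + f(x, h) * A
    Names and sizes are illustrative, not the paper's code.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # f(x, h) gates both the effective time constant and the input drive.
        self.f = nn.Sequential(
            nn.Linear(input_size + hidden_size, hidden_size),
            nn.Sigmoid(),
        )
        self.tau = nn.Parameter(torch.ones(hidden_size))  # base time constants
        self.A = nn.Parameter(torch.zeros(hidden_size))   # bias vector

    def forward(self, x: torch.Tensor, h: torch.Tensor, dt: float = 0.1):
        # One fused-solver step; dt may vary from call to call.
        f = self.f(torch.cat([x, h], dim=-1))
        return (h + dt * f * self.A) / (1.0 + dt * (1.0 / self.tau + f))
```

Unrolled over per-frame visual features, such a cell produces the control outputs; because the update takes an explicit step size `dt`, the hidden state evolves in continuous time rather than at fixed discrete ticks, which is what distinguishes these models from conventional recurrent networks.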
Cite
Text
Vorbach et al. "Causal Navigation by Continuous-Time Neural Networks." Neural Information Processing Systems, 2021.
Markdown
[Vorbach et al. "Causal Navigation by Continuous-Time Neural Networks." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/vorbach2021neurips-causal/)
BibTeX
@inproceedings{vorbach2021neurips-causal,
  title = {{Causal Navigation by Continuous-Time Neural Networks}},
  author = {Vorbach, Charles and Hasani, Ramin and Amini, Alexander and Lechner, Mathias and Rus, Daniela},
  booktitle = {Neural Information Processing Systems},
  year = {2021},
  url = {https://mlanthology.org/neurips/2021/vorbach2021neurips-causal/}
}