End-to-End Learning of Driving Models from Large-Scale Video Datasets
Abstract
Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or simulation environment. We advocate learning a generic vehicle motion model from large-scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm. We provide a novel large-scale dataset of crowd-sourced driving behavior suitable for training our model, and report results predicting the driver action on held-out sequences across diverse conditions.
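To make the described architecture concrete, the following is a minimal sketch, assuming a PyTorch-style implementation: a fully-convolutional encoder extracts per-frame features, which are fused with the previous vehicle state and passed through an LSTM to predict a distribution over discrete future egomotion actions, with a segmentation head standing in for the privileged side task. All layer sizes, the 4-way action set, and module names here are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class FCNLSTMSketch(nn.Module):
    """Hypothetical FCN-LSTM sketch (not the authors' exact model):
    a fully-convolutional encoder produces per-frame visual features,
    which are concatenated with the previous vehicle state and fed to
    an LSTM that outputs logits over discrete future egomotion actions.
    A per-pixel segmentation head serves as the privileged side task."""

    def __init__(self, n_actions=4, n_seg_classes=19, state_dim=2, hidden=64):
        super().__init__()
        # Fully-convolutional encoder (stand-in for a larger FCN backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Side task: per-pixel semantic segmentation logits (privileged learning).
        self.seg_head = nn.Conv2d(64, n_seg_classes, 1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Temporal model over fused visual features and previous motion state.
        self.lstm = nn.LSTM(64 + state_dim, hidden, batch_first=True)
        self.action_head = nn.Linear(hidden, n_actions)

    def forward(self, frames, prev_state):
        # frames: (B, T, 3, H, W); prev_state: (B, T, state_dim)
        B, T = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))           # (B*T, 64, h, w)
        seg_logits = self.seg_head(feats)                    # side-task output
        pooled = self.pool(feats).flatten(1).view(B, T, -1)  # (B, T, 64)
        fused, _ = self.lstm(torch.cat([pooled, prev_state], dim=-1))
        action_logits = self.action_head(fused)              # (B, T, n_actions)
        return action_logits, seg_logits

In this sketch, a cross-entropy loss over action_logits against the observed driver action and an auxiliary cross-entropy loss over seg_logits would be combined during training; the segmentation branch can be dropped at test time, in keeping with the privileged learning setup described in the abstract.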
Cite
Text
Xu et al. "End-to-End Learning of Driving Models from Large-Scale Video Datasets." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.376
Markdown
[Xu et al. "End-to-End Learning of Driving Models from Large-Scale Video Datasets." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/xu2017cvpr-endtoend/) doi:10.1109/CVPR.2017.376
BibTeX
@inproceedings{xu2017cvpr-endtoend,
title = {{End-to-End Learning of Driving Models from Large-Scale Video Datasets}},
author = {Xu, Huazhe and Gao, Yang and Yu, Fisher and Darrell, Trevor},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2017},
doi = {10.1109/CVPR.2017.376},
url = {https://mlanthology.org/cvpr/2017/xu2017cvpr-endtoend/}
}