DSDNet: Deep Structured Self-Driving Network
Abstract
In this paper, we propose the Deep Structured self-Driving Network (DSDNet), which performs object detection, motion prediction, and motion planning with a single neural network. Towards this goal, we develop a deep structured energy-based model which considers the interactions between actors and produces socially consistent multimodal future predictions. Furthermore, DSDNet explicitly exploits the predicted future distributions of actors to plan a safe maneuver by using a structured planning cost. Our sample-based formulation allows us to overcome the difficulty of probabilistic inference over continuous random variables. Experiments on a number of large-scale self-driving datasets demonstrate that our model significantly outperforms the state-of-the-art.
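The sample-based formulation mentioned above can be illustrated with a minimal sketch: rather than performing inference over a continuous trajectory space, score a finite set of sampled candidate trajectories with an energy, normalize into a discrete distribution, and pick the ego maneuver with the lowest expected cost under that distribution. All names, shapes, and the toy energy/cost functions below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

num_samples, horizon = 5, 10  # candidate trajectories, timesteps
# Sampled (x, y) waypoint sequences for one actor (hypothetical samples).
samples = rng.normal(size=(num_samples, horizon, 2))

def energy(traj):
    """Toy energy: prefer smooth trajectories (small accelerations)."""
    accel = np.diff(traj, n=2, axis=0)
    return np.sum(accel ** 2)

energies = np.array([energy(t) for t in samples])
# Softmax over negative energies: a discrete distribution over candidate
# futures (lower energy -> higher probability). Shifted for stability.
probs = np.exp(-(energies - energies.min()))
probs /= probs.sum()

def plan_cost(ego, other):
    """Toy proximity cost: penalize the ego plan for passing near the actor."""
    return np.sum(np.exp(-np.linalg.norm(ego - other, axis=-1)))

# Planning: evaluate each ego candidate's expected cost under the
# predicted distribution over the other actor's futures.
ego_candidates = rng.normal(size=(3, horizon, 2))
expected_costs = np.array(
    [sum(p * plan_cost(e, s) for p, s in zip(probs, samples))
     for e in ego_candidates]
)
best = ego_candidates[np.argmin(expected_costs)]
```

Because both prediction and planning reduce to sums over a finite sample set, the expected planning cost is exact under the discretized distribution, which is the key computational convenience of the sample-based view.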
Cite
Text
Zeng et al. "DSDNet: Deep Structured Self-Driving Network." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58589-1_10

Markdown

[Zeng et al. "DSDNet: Deep Structured Self-Driving Network." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/zeng2020eccv-dsdnet/) doi:10.1007/978-3-030-58589-1_10

BibTeX
@inproceedings{zeng2020eccv-dsdnet,
title = {{DSDNet: Deep Structured Self-Driving Network}},
author = {Zeng, Wenyuan and Wang, Shenlong and Liao, Renjie and Chen, Yun and Yang, Bin and Urtasun, Raquel},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58589-1_10},
url = {https://mlanthology.org/eccv/2020/zeng2020eccv-dsdnet/}
}