V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction

Abstract

In this paper, we explore the use of vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles. By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints. This allows us to see through occlusions and detect actors at long range, where the observations are very sparse or non-existent. We also show that our approach of sending compressed deep feature map activations achieves high accuracy while satisfying communication bandwidth requirements.
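To make the pipeline the abstract describes concrete, below is a minimal Python/NumPy sketch of the flow: each vehicle compresses its bird's-eye-view feature map, broadcasts it, and the receiver fuses the decompressed maps with its own. Everything here is an illustrative assumption rather than the authors' implementation: the function names are hypothetical, uniform 8-bit quantization stands in for the paper's learned compression module, and a simple mean stands in for its spatially aware graph-neural-network aggregation. The toy also skips the step where each received map is warped into the ego vehicle's coordinate frame before fusion.

import numpy as np

def compress(feat, bits=8):
    # Uniformly quantize a feature map to `bits` bits per value
    # (a stand-in for the learned compression used in the paper).
    lo, hi = float(feat.min()), float(feat.max())
    levels = 2 ** bits - 1
    q = np.round((feat - lo) / (hi - lo + 1e-8) * levels).astype(np.uint8)
    return q, lo, hi

def decompress(q, lo, hi, bits=8):
    # Invert the quantization on the receiving vehicle.
    levels = 2 ** bits - 1
    return q.astype(np.float32) / levels * (hi - lo) + lo

def aggregate(ego_feat, received_feats):
    # Fuse feature maps assumed to be already warped into the ego frame.
    # V2VNet uses a spatially aware GNN; a mean is used here purely
    # for illustration.
    stack = np.stack([ego_feat] + received_feats, axis=0)
    return stack.mean(axis=0)

# Toy example: ego vehicle plus two nearby senders, C x H x W BEV features.
rng = np.random.default_rng(0)
ego = rng.standard_normal((32, 64, 64)).astype(np.float32)
others = [rng.standard_normal((32, 64, 64)).astype(np.float32) for _ in range(2)]

# Each sender compresses before broadcasting; the receiver decompresses.
received = [decompress(*compress(f)) for f in others]
fused = aggregate(ego, received)
print(fused.shape)  # (32, 64, 64)

The point of transmitting intermediate feature activations, rather than raw sensor data or final detections, is the trade-off the abstract highlights: compressed features are small enough to satisfy V2V bandwidth limits while still carrying the information needed to see through occlusions and detect distant actors.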

Cite

Text

Wang et al. "V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58536-5_36

Markdown

[Wang et al. "V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/wang2020eccv-v2vnet/) doi:10.1007/978-3-030-58536-5_36

BibTeX

@inproceedings{wang2020eccv-v2vnet,
  title     = {{V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction}},
  author    = {Wang, Tsun-Hsuan and Manivasagam, Sivabalan and Liang, Ming and Yang, Bin and Zeng, Wenyuan and Urtasun, Raquel},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58536-5_36},
  url       = {https://mlanthology.org/eccv/2020/wang2020eccv-v2vnet/}
}