V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer

Abstract

In this paper, we investigate the application of Vehicle-to-Everything (V2X) communication to improve the perception performance of autonomous vehicles. We present a robust cooperative perception framework with V2X communication using a novel vision Transformer. Specifically, we build a holistic attention model, namely V2X-ViT, to effectively fuse information across on-road agents (i.e., vehicles and infrastructure). V2X-ViT consists of alternating layers of heterogeneous multi-agent self-attention and multi-scale window self-attention, which capture inter-agent interactions and per-agent spatial relationships. These key modules are designed in a unified Transformer architecture to handle common V2X challenges, including asynchronous information sharing, pose errors, and the heterogeneity of V2X components. To validate our approach, we create a large-scale V2X perception dataset using CARLA and OpenCDA. Extensive experimental results demonstrate that V2X-ViT sets new state-of-the-art performance for 3D object detection and remains robust even in harsh, noisy environments. The code is available at https://github.com/DerrickXuNu/v2x-vit.
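The alternating-attention pattern described in the abstract can be illustrated with a deliberately simplified sketch. This is not the authors' implementation (which operates on bird's-eye-view feature maps with learned projections, multi-scale windows, delay-aware positional encoding, and per-agent-type parameters); the function and variable names below are hypothetical, and plain scaled dot-product attention stands in for both specialized modules. It only shows the two-step structure: attend across agents at each spatial location, then attend across spatial locations within each agent.

```python
# Hypothetical, simplified sketch of V2X-ViT's alternating attention layers.
# NOT the authors' code: plain dot-product attention (no learned weights)
# stands in for heterogeneous multi-agent self-attention and multi-scale
# window self-attention.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of feature vectors."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(d)])
    return out

def v2xvit_block(agent_feats):
    """One alternating block.

    agent_feats: {agent_id: list of per-location feature vectors}.
    Step 1 (stand-in for heterogeneous multi-agent self-attention):
      at each spatial location, each agent attends over all agents.
    Step 2 (stand-in for multi-scale window self-attention):
      each agent attends over its own spatial locations.
    """
    ids = list(agent_feats)
    n_loc = len(agent_feats[ids[0]])
    # Step 1: fuse information across agents, location by location.
    fused = {a: [None] * n_loc for a in ids}
    for loc in range(n_loc):
        tokens = [agent_feats[a][loc] for a in ids]
        mixed = attention(tokens, tokens, tokens)
        for a, m in zip(ids, mixed):
            fused[a][loc] = m
    # Step 2: spatial self-attention within each agent.
    return {a: attention(fused[a], fused[a], fused[a]) for a in ids}

# Toy input: an ego vehicle and an infrastructure agent, two locations,
# two feature channels each.
feats = {"ego":   [[1.0, 0.0], [0.0, 1.0]],
         "infra": [[0.5, 0.5], [1.0, 1.0]]}
out = v2xvit_block(feats)
```

In the actual model these two attentions alternate over several layers, so inter-agent fusion and per-agent spatial reasoning refine each other iteratively.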

Cite

Text

Xu et al. "V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19842-7_7

Markdown

[Xu et al. "V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/xu2022eccv-v2xvit/) doi:10.1007/978-3-031-19842-7_7

BibTeX

@inproceedings{xu2022eccv-v2xvit,
  title     = {{V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer}},
  author    = {Xu, Runsheng and Xiang, Hao and Tu, Zhengzhong and Xia, Xin and Yang, Ming-Hsuan and Ma, Jiaqi},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19842-7_7},
  url       = {https://mlanthology.org/eccv/2022/xu2022eccv-v2xvit/}
}