Cross-View Transformers for Real-Time Map-View Semantic Segmentation

Abstract

We present cross-view transformers, an efficient attention-based model for map-view semantic segmentation from multiple cameras. Our architecture implicitly learns a mapping from individual camera views into a canonical map-view representation using a camera-aware cross-view attention mechanism. Each camera uses positional embeddings that depend on its intrinsic and extrinsic calibration. These embeddings allow a transformer to learn the mapping across different views without ever explicitly modeling it geometrically. The architecture consists of a convolutional image encoder for each view and cross-view transformer layers to infer a map-view semantic segmentation. Our model is simple, easily parallelizable, and runs in real-time. The presented architecture performs at state-of-the-art on the nuScenes dataset, with 4x faster inference speeds. Code is available at https://github.com/bradyz/cross_view_transformers.
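
The core idea described in the abstract is a cross-attention layer in which map-view queries attend to image features whose positional embeddings are derived from each camera's calibration. Below is a minimal PyTorch sketch of how such a camera-aware cross-view attention layer could be wired up; the module name, the ray-based embedding, and all shapes are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class CameraAwareCrossAttention(nn.Module):
    """Sketch: map-view queries attend to multi-camera image features
    whose positional embeddings come from camera intrinsics/extrinsics."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Hypothetical embedding of per-pixel viewing rays (directions
        # unprojected with the intrinsics and rotated by the extrinsics).
        self.ray_embed = nn.Sequential(
            nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, map_queries, image_feats, camera_rays):
        # map_queries:  (B, H_map*W_map, dim)        learned map-view grid
        # image_feats:  (B, N_cams*H_img*W_img, dim) flattened camera features
        # camera_rays:  (B, N_cams*H_img*W_img, 3)   per-pixel ray directions
        keys = image_feats + self.ray_embed(camera_rays)
        out, _ = self.attn(query=map_queries, key=keys, value=image_feats)
        return out


# Toy usage with made-up sizes (6 cameras, 25x25 map grid, 28x60 feature maps).
layer = CameraAwareCrossAttention(dim=128)
bev = layer(
    torch.zeros(1, 25 * 25, 128),
    torch.zeros(1, 6 * 28 * 60, 128),
    torch.zeros(1, 6 * 28 * 60, 3),
)
print(bev.shape)  # torch.Size([1, 625, 128])
```

In this sketch the geometry enters only through the ray-derived positional embeddings added to the keys, so the attention itself never models the view transform explicitly, which mirrors the claim in the abstract.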

Cite

Text

Zhou and Krähenbühl. "Cross-View Transformers for Real-Time Map-View Semantic Segmentation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01339

Markdown

[Zhou and Krähenbühl. "Cross-View Transformers for Real-Time Map-View Semantic Segmentation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/zhou2022cvpr-crossview/) doi:10.1109/CVPR52688.2022.01339

BibTeX

@inproceedings{zhou2022cvpr-crossview,
  title     = {{Cross-View Transformers for Real-Time Map-View Semantic Segmentation}},
  author    = {Zhou, Brady and Krähenbühl, Philipp},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {13760-13769},
  doi       = {10.1109/CVPR52688.2022.01339},
  url       = {https://mlanthology.org/cvpr/2022/zhou2022cvpr-crossview/}
}