CLIP-BEVFormer: Enhancing Multi-View Image-Based BEV Detector with Ground Truth Flow
Abstract
Autonomous driving stands as a pivotal domain in computer vision, shaping the future of transportation. Within this paradigm, the backbone of the system plays a crucial role in interpreting the complex environment. However, a notable challenge has been the loss of clear supervision when it comes to Bird's Eye View (BEV) elements. To address this limitation, we introduce CLIP-BEVFormer, a novel approach that leverages contrastive learning techniques to enhance multi-view image-derived BEV backbones with ground truth information flow. We conduct extensive experiments on the challenging nuScenes dataset and showcase significant and consistent improvements over the state of the art. Specifically, CLIP-BEVFormer achieves an impressive 8.5% and 9.2% enhancement in terms of NDS and mAP, respectively, over the previous best BEV model on the 3D object detection task.
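The abstract describes a contrastive alignment between BEV features and encoded ground-truth elements. The sketch below illustrates the general idea with a CLIP-style symmetric InfoNCE loss; the function name, tensor shapes, and temperature value are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumed, not from the paper): contrastive alignment between
# BEV-derived features and ground-truth embeddings, CLIP-style.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(bev_feats, gt_feats, temperature=0.07):
    """InfoNCE-style loss pairing the i-th BEV feature with the i-th GT embedding.

    bev_feats: (N, D) features derived from the BEV backbone (assumed shape).
    gt_feats:  (N, D) embeddings of the matched ground-truth elements (assumed shape).
    """
    bev = F.normalize(bev_feats, dim=-1)
    gt = F.normalize(gt_feats, dim=-1)
    logits = bev @ gt.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(bev.size(0), device=bev.device)
    # Symmetric cross-entropy over rows and columns, as in CLIP-style training.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))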
Cite
Text
Pan et al. "CLIP-BEVFormer: Enhancing Multi-View Image-Based BEV Detector with Ground Truth Flow." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01441
Markdown
[Pan et al. "CLIP-BEVFormer: Enhancing Multi-View Image-Based BEV Detector with Ground Truth Flow." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/pan2024cvpr-clipbevformer/) doi:10.1109/CVPR52733.2024.01441
BibTeX
@inproceedings{pan2024cvpr-clipbevformer,
title = {{CLIP-BEVFormer: Enhancing Multi-View Image-Based BEV Detector with Ground Truth Flow}},
author = {Pan, Chenbin and Yaman, Burhaneddin and Velipasalar, Senem and Ren, Liu},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2024},
pages = {15216--15225},
doi = {10.1109/CVPR52733.2024.01441},
url = {https://mlanthology.org/cvpr/2024/pan2024cvpr-clipbevformer/}
}