Bootstrapping Autonomous Driving Radars with Self-Supervised Learning

Abstract

Perception for autonomous vehicles using radar has attracted increased research interest due to its ability to operate in fog and bad weather. However, training radar models is hindered by the cost and difficulty of annotating large-scale radar data. To overcome this bottleneck, we propose a self-supervised learning framework that leverages large amounts of unlabeled radar data to pre-train radar-only embeddings for self-driving perception tasks. The proposed method combines radar-to-radar and radar-to-vision contrastive losses to learn a general representation from unlabeled radar heatmaps paired with their corresponding camera images. When used for downstream object detection, we demonstrate that the proposed self-supervision framework can improve the accuracy of state-of-the-art supervised baselines by 5.8% in mAP. Code is available at https://github.com/yiduohao/Radical.
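The abstract describes combining radar-to-radar and radar-to-vision contrastive losses. A minimal sketch of what such a combined objective might look like, assuming InfoNCE-style contrastive terms over paired embeddings; the function names, weights, and loss form here are illustrative assumptions, not the authors' implementation (see the linked repository for that):

```python
import numpy as np

def info_nce(a, b, temperature=0.1):
    """InfoNCE-style contrastive loss: row i of `a` and row i of `b`
    are a positive pair; all other rows serve as negatives."""
    # Normalize embeddings to unit length so similarities are cosine.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Log-softmax over each row; the diagonal holds the positive pairs.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def combined_contrastive_loss(radar_a, radar_b, vision, w_r2r=1.0, w_r2v=1.0):
    """Hypothetical combination of the two losses named in the abstract.
    radar_a, radar_b: embeddings of two views of the same radar heatmaps.
    vision: embeddings of the paired camera images."""
    l_r2r = info_nce(radar_a, radar_b)  # radar-to-radar term
    l_r2v = info_nce(radar_a, vision)   # radar-to-vision term
    return w_r2r * l_r2r + w_r2v * l_r2v
```

In this sketch, the radar-to-radar term encourages invariance across radar views, while the radar-to-vision term aligns radar embeddings with camera embeddings of the same scene; the relative weights `w_r2r` and `w_r2v` are placeholders.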

Cite

Text

Hao et al. "Bootstrapping Autonomous Driving Radars with Self-Supervised Learning." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01422

Markdown

[Hao et al. "Bootstrapping Autonomous Driving Radars with Self-Supervised Learning." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/hao2024cvpr-bootstrapping/) doi:10.1109/CVPR52733.2024.01422

BibTeX

@inproceedings{hao2024cvpr-bootstrapping,
  title     = {{Bootstrapping Autonomous Driving Radars with Self-Supervised Learning}},
  author    = {Hao, Yiduo and Madani, Sohrab and Guan, Junfeng and Alloulah, Mohammed and Gupta, Saurabh and Hassanieh, Haitham},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {15012--15023},
  doi       = {10.1109/CVPR52733.2024.01422},
  url       = {https://mlanthology.org/cvpr/2024/hao2024cvpr-bootstrapping/}
}