RLNet: Adaptive Fusion of 4D Radar and LiDAR for 3D Object Detection
Abstract
LiDAR-based 3D object detection has made great progress in recent years and has become the mainstream configuration for autonomous vehicles. However, LiDAR can suffer substantial performance degradation under adverse weather or in long-distance object detection, due to its short wavelength and limited emission energy. 4D millimeter-wave radar can provide 3D point clouds similar to LiDAR's while being far more robust to adverse weather. However, 3D object detection with 4D radar alone is less satisfactory because of the high sparsity and flickering nature of its measurements. In this paper, we propose a novel 3D object detection method termed RLNet, which effectively integrates 4D radar and LiDAR through adaptive feature fusion. An adaptive radar point speed compensation and a modality dropout training strategy are further introduced to improve performance. RLNet achieves state-of-the-art performance in our experiments, outperforming the baseline method by 7.35% and 2.76% in mAP on the popular VoD and ZJUODset datasets, respectively. The code will be made available.
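The modality dropout strategy mentioned in the abstract can be illustrated with a minimal sketch: during training, one sensor branch's features are occasionally zeroed so the fusion network cannot over-rely on either modality. This is a generic illustration of the idea, not the paper's implementation; the function name, drop probability, and zeroing policy are assumptions.

```python
import numpy as np

def modality_dropout(radar_feat, lidar_feat, p_drop=0.2, rng=None):
    """Randomly zero out one modality's feature map during training.

    Hypothetical sketch of modality dropout: with probability p_drop the
    radar branch is dropped, with probability p_drop the LiDAR branch is
    dropped, and otherwise both are kept. At most one branch is zeroed,
    so the detector always receives at least one valid modality.
    """
    rng = rng or np.random.default_rng()
    r = rng.random()
    if r < p_drop:            # drop the radar branch
        radar_feat = np.zeros_like(radar_feat)
    elif r < 2 * p_drop:      # drop the LiDAR branch
        lidar_feat = np.zeros_like(lidar_feat)
    return radar_feat, lidar_feat

# Example: feature vectors from each branch before fusion
radar = np.ones((64,))
lidar = np.ones((64,))
radar_out, lidar_out = modality_dropout(radar, lidar, p_drop=0.2,
                                        rng=np.random.default_rng(0))
```

At inference time the function would simply be skipped (or `p_drop=0`), so both modalities are always fused.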
Cite

Text

Xu and Xiang. "RLNet: Adaptive Fusion of 4D Radar and LiDAR for 3D Object Detection." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91767-7_13

Markdown

[Xu and Xiang. "RLNet: Adaptive Fusion of 4D Radar and LiDAR for 3D Object Detection." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/xu2024eccvw-rlnet/) doi:10.1007/978-3-031-91767-7_13

BibTeX
@inproceedings{xu2024eccvw-rlnet,
title = {{RLNet: Adaptive Fusion of 4D Radar and LiDAR for 3D Object Detection}},
author = {Xu, Ruoyu and Xiang, Zhiyu},
booktitle = {European Conference on Computer Vision Workshops},
year = {2024},
pages = {181--194},
doi = {10.1007/978-3-031-91767-7_13},
url = {https://mlanthology.org/eccvw/2024/xu2024eccvw-rlnet/}
}