Physical 3D Adversarial Attacks Against Monocular Depth Estimation in Autonomous Driving

Abstract

Deep learning-based monocular depth estimation (MDE), extensively applied in autonomous driving, is known to be vulnerable to adversarial attacks. Previous physical attacks against MDE models rely on 2D adversarial patches, so they only affect a small, localized region in the MDE map but fail under various viewpoints. To address these limitations, we propose 3D Depth Fool (3D^2Fool), the first 3D texture-based adversarial attack against MDE models. 3D^2Fool is specifically optimized to generate 3D adversarial textures that are agnostic to vehicle model types and robust to bad weather conditions, such as rain and fog. Experimental results validate the superior performance of our 3D^2Fool across various scenarios, including vehicles, MDE models, weather conditions, and viewpoints. Real-world experiments with printed 3D textures on physical vehicle models further demonstrate that our 3D^2Fool can cause an MDE error of over 10 meters.
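To make the high-level idea in the abstract concrete, the sketch below shows one plausible way such a texture optimization could be set up: a texture is rendered onto a vehicle under randomly sampled viewpoints and simulated weather, and optimized to maximize the MDE depth error. This is a minimal illustration, not the authors' implementation; the names mde_model, render_vehicle, and simulate_weather are hypothetical placeholders.

    # Hedged sketch of an adversarial-texture optimization loop in the spirit of
    # the abstract. All helper names are assumptions, not the paper's code.
    import torch

    def optimize_adversarial_texture(mde_model, render_vehicle, simulate_weather,
                                     backgrounds, steps=1000, lr=0.01):
        # Texture parameterized as an RGB image in [0, 1].
        texture = torch.rand(1, 3, 256, 256, requires_grad=True)
        optimizer = torch.optim.Adam([texture], lr=lr)

        for _ in range(steps):
            # Sample a random background; the (hypothetical) renderer also samples
            # a random viewpoint, and the weather module adds rain/fog effects,
            # so the texture stays effective across these variations.
            bg = backgrounds[torch.randint(len(backgrounds), (1,)).item()]
            scene = render_vehicle(texture.clamp(0, 1), background=bg)
            scene = simulate_weather(scene)

            depth = mde_model(scene)               # predicted depth with the texture
            clean_depth = mde_model(bg).detach()   # reference depth without it

            # Push the prediction away from the clean estimate (maximize error),
            # i.e. minimize the negative absolute difference.
            loss = -(depth - clean_depth).abs().mean()

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        return texture.detach().clamp(0, 1)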

Cite

Text

Zheng et al. "Physical 3D Adversarial Attacks Against Monocular Depth Estimation in Autonomous Driving." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.02308

Markdown

[Zheng et al. "Physical 3D Adversarial Attacks Against Monocular Depth Estimation in Autonomous Driving." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/zheng2024cvpr-physical/) doi:10.1109/CVPR52733.2024.02308

BibTeX

@inproceedings{zheng2024cvpr-physical,
  title     = {{Physical 3D Adversarial Attacks Against Monocular Depth Estimation in Autonomous Driving}},
  author    = {Zheng, Junhao and Lin, Chenhao and Sun, Jiahao and Zhao, Zhengyu and Li, Qian and Shen, Chao},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {24452--24461},
  doi       = {10.1109/CVPR52733.2024.02308},
  url       = {https://mlanthology.org/cvpr/2024/zheng2024cvpr-physical/}
}