On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles

Abstract

Trajectory prediction is a critical component for autonomous vehicles (AVs) to perform safe planning and navigation. However, few studies have analyzed the adversarial robustness of trajectory prediction or investigated whether the worst-case prediction can still lead to safe planning. To bridge this gap, we study the adversarial robustness of trajectory prediction models by proposing a new adversarial attack that perturbs normal vehicle trajectories to maximize the prediction error. Our experiments on three models and three datasets show that our attack increases the prediction error by more than 150%. Our case studies show that if an adversary drives a vehicle close to the target AV along the adversarial trajectory, the AV may produce an inaccurate prediction and even make unsafe driving decisions. We also explore possible mitigation techniques via data augmentation and trajectory smoothing.
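To make the attack idea concrete, here is a minimal, hypothetical sketch, not the paper's implementation: a constant-velocity extrapolator stands in for a learned trajectory-prediction model, and a bounded perturbation of the observed history is optimized by finite-difference projected gradient ascent to maximize the average displacement error (ADE) of the prediction. All function names, bounds, and step sizes here are illustrative assumptions.

```python
import numpy as np

def predict_cv(history, horizon=5):
    # Constant-velocity predictor: extrapolate from the last two observed
    # points (a stand-in for a learned trajectory-prediction model).
    v = history[-1] - history[-2]
    return history[-1] + v * np.arange(1, horizon + 1)[:, None]

def ade(pred, future):
    # Average displacement error between predicted and true future points.
    return np.mean(np.linalg.norm(pred - future, axis=1))

def attack(history, future, eps=0.3, step=0.05, iters=50):
    # White-box attack sketch: perturb the observed history within an
    # L-infinity bound `eps` to maximize prediction error, using
    # finite-difference gradient ascent (illustrative, not the paper's method).
    rng = np.random.default_rng(0)
    delta = rng.uniform(-eps / 10, eps / 10, size=history.shape)
    for _ in range(iters):
        grad = np.zeros_like(delta)
        for idx in np.ndindex(*delta.shape):
            d = np.zeros_like(delta)
            d[idx] = 1e-4
            hi = ade(predict_cv(history + delta + d), future)
            lo = ade(predict_cv(history + delta - d), future)
            grad[idx] = (hi - lo) / 2e-4
        # Sign-gradient ascent step, projected back into the eps-ball.
        delta = np.clip(delta + step * np.sign(grad), -eps, eps)
    return history + delta

# Toy scene: a vehicle moving along +x at unit speed; the clean history
# makes the constant-velocity prediction match the future exactly.
t = np.arange(8, dtype=float)
history = np.stack([t[:3], np.zeros(3)], axis=1)
future = np.stack([t[3:], np.zeros(5)], axis=1)

clean_err = ade(predict_cv(history), future)
adv_err = ade(predict_cv(attack(history, future)), future)
print(clean_err, adv_err)  # the adversarial history should yield a larger error
```

A real instantiation would replace the finite-difference gradients with backpropagation through a differentiable prediction model and add realism constraints (e.g., physically feasible speeds and accelerations) on the perturbed trajectory.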

Cite

Text

Zhang et al. "On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01473

Markdown

[Zhang et al. "On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/zhang2022cvpr-adversarial/) doi:10.1109/CVPR52688.2022.01473

BibTeX

@inproceedings{zhang2022cvpr-adversarial,
  title     = {{On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles}},
  author    = {Zhang, Qingzhao and Hu, Shengtuo and Sun, Jiachen and Chen, Qi Alfred and Mao, Z. Morley},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {15159--15168},
  doi       = {10.1109/CVPR52688.2022.01473},
  url       = {https://mlanthology.org/cvpr/2022/zhang2022cvpr-adversarial/}
}