Gradient-Free Adversarial Training Against Image Corruption for Learning-Based Steering

Abstract

We introduce a simple yet effective framework for improving the robustness of learning algorithms against image corruptions for autonomous driving. These corruptions can occur due to both internal factors (e.g., sensor noise and hardware abnormalities) and external factors (e.g., lighting, weather, visibility, and other environmental effects). Using sensitivity analysis with FID-based parameterization, we propose a novel algorithm exploiting basis perturbations to improve the overall performance of autonomous steering and other image processing tasks, such as classification and detection, for self-driving cars. Our model not only improves performance on the original dataset, but also achieves significant performance improvements on datasets with multiple and unseen perturbations, by up to 87% and 77%, respectively. A comparison between our approach and other state-of-the-art (SOTA) techniques confirms the effectiveness of our technique in improving the robustness of neural network training for learning-based steering and other image processing tasks.
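To make the idea of gradient-free training with basis perturbations concrete, the sketch below augments training images by applying one randomly chosen perturbation from a small basis set. The specific perturbations shown (brightness shift, Gaussian sensor noise, box blur) and all function names are illustrative assumptions, not the paper's actual basis set or implementation.

```python
import numpy as np

# Hypothetical "basis perturbations" for robustness augmentation.
# The paper's actual basis set and FID-based parameterization are not
# reproduced here; these three simple corruptions are assumptions.

def perturb_brightness(img, delta=0.2):
    """Shift pixel intensities by a constant (lighting change)."""
    return np.clip(img + delta, 0.0, 1.0)

def perturb_noise(img, sigma=0.05, rng=None):
    """Add Gaussian sensor noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def perturb_blur(img, k=3):
    """Box blur via a moving average along each spatial axis."""
    out = img.copy()
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda v: np.convolve(v, np.ones(k) / k, mode="same"), axis, out)
    return out

def augment(img, rng=None):
    """Gradient-free augmentation: apply one randomly chosen perturbation.

    No gradients of the model are needed, unlike gradient-based
    adversarial training (e.g., PGD).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    fn = rng.choice([perturb_brightness, perturb_noise, perturb_blur])
    return fn(img)
```

In a training loop, `augment` would be applied to each image (or batch) before the forward pass, so the steering network sees corrupted variants alongside clean data.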

Cite

Text

Shen et al. "Gradient-Free Adversarial Training Against Image Corruption for Learning-Based Steering." Neural Information Processing Systems, 2021.

Markdown

[Shen et al. "Gradient-Free Adversarial Training Against Image Corruption for Learning-Based Steering." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/shen2021neurips-gradientfree/)

BibTeX

@inproceedings{shen2021neurips-gradientfree,
  title     = {{Gradient-Free Adversarial Training Against Image Corruption for Learning-Based Steering}},
  author    = {Shen, Yu and Zheng, Laura and Shu, Manli and Li, Weizi and Goldstein, Tom and Lin, Ming},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/shen2021neurips-gradientfree/}
}