Delving into Adversarial Attacks on Deep Policies

Abstract

Adversarial examples have been shown to exist for a variety of deep learning architectures. Deep reinforcement learning has shown promising results on training agent policies directly on raw inputs such as image pixels. In this paper we present a novel study of adversarial attacks on deep reinforcement learning policies. We compare the effectiveness of attacks using adversarial examples versus random noise. We present a novel method, based on the value function, for reducing the number of times adversarial examples need to be injected for a successful attack. We further explore how re-training on random noise and FGSM perturbations affects resilience to adversarial examples.
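The sketch below illustrates, under loose assumptions, the two attack ingredients the abstract describes: an FGSM-style perturbation of the agent's observation, and a gating rule that only injects the perturbation when the critic's value estimate is high. It is not the authors' code; the network shapes, `eps`, and `value_threshold` are illustrative placeholders.

```python
# Minimal sketch (not the paper's implementation) of an FGSM attack on a policy
# network plus a hypothetical value-based rule for when to inject it.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyValueNet(nn.Module):
    """Toy actor-critic head over a flattened pixel observation."""

    def __init__(self, obs_dim: int = 84 * 84, n_actions: int = 6):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU())
        self.policy_head = nn.Linear(256, n_actions)
        self.value_head = nn.Linear(256, 1)

    def forward(self, obs: torch.Tensor):
        h = self.body(obs)
        return self.policy_head(h), self.value_head(h)


def fgsm_perturb_obs(net: PolicyValueNet, obs: torch.Tensor,
                     eps: float = 0.01) -> torch.Tensor:
    """FGSM on the observation: take one signed-gradient step that increases
    the loss for the action the clean policy currently prefers."""
    obs = obs.clone().detach().requires_grad_(True)
    logits, _ = net(obs)
    target = logits.argmax(dim=-1)           # action the clean policy would take
    loss = F.cross_entropy(logits, target)
    loss.backward()
    adv_obs = obs + eps * obs.grad.sign()     # single FGSM step
    return adv_obs.clamp(0.0, 1.0).detach()   # keep pixels in a valid range


def maybe_attack(net: PolicyValueNet, obs: torch.Tensor,
                 eps: float = 0.01, value_threshold: float = 0.5) -> torch.Tensor:
    """Hypothetical value-based gating: only inject the perturbation when the
    critic expects high return, so fewer frames need to be attacked."""
    with torch.no_grad():
        _, value = net(obs)
    if value.item() > value_threshold:
        return fgsm_perturb_obs(net, obs, eps)
    return obs


if __name__ == "__main__":
    net = PolicyValueNet()
    clean_obs = torch.rand(1, 84 * 84)
    attacked_obs = maybe_attack(net, clean_obs)
    print("perturbation L_inf:", (attacked_obs - clean_obs).abs().max().item())
```

Re-training for resilience, as explored in the paper, would amount to rolling out the policy on observations produced by a function like `maybe_attack` (or plain random noise) and continuing training on those perturbed inputs.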

Cite

Text

Kos and Song. "Delving into Adversarial Attacks on Deep Policies." International Conference on Learning Representations, 2017.

Markdown

[Kos and Song. "Delving into Adversarial Attacks on Deep Policies." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/kos2017iclr-delving/)

BibTeX

@inproceedings{kos2017iclr-delving,
  title     = {{Delving into Adversarial Attacks on Deep Policies}},
  author    = {Kos, Jernej and Song, Dawn},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  url       = {https://mlanthology.org/iclr/2017/kos2017iclr-delving/}
}