Adversarial Attacks on Stochastic Bandits

Abstract

We study adversarial attacks that manipulate the reward signals to control the actions chosen by a stochastic multi-armed bandit algorithm. We propose the first attack against two popular bandit algorithms: $\epsilon$-greedy and UCB, *without* knowledge of the mean rewards. The attacker is able to spend only logarithmic effort, multiplied by a problem-specific parameter that becomes smaller as the bandit problem gets easier to attack. The result means the attacker can easily hijack the behavior of the bandit algorithm to promote or obstruct certain actions, say, a particular medical treatment. As bandits are seeing increasingly wide use in practice, our study exposes a significant security threat.
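The core idea can be illustrated with a toy simulation: the attacker perturbs the reward observed whenever the learner pulls a non-target arm, dragging that arm's empirical mean just below the target arm's, so a greedy learner is steered toward the target. This is a minimal sketch, not the paper's exact attack: the arm means, the exploration schedule, and the `0.1` margin are illustrative assumptions.

```python
import random

def attacked_epsilon_greedy(T=10000, seed=0, target=1):
    """Toy reward-poisoning attack on epsilon-greedy (illustrative sketch).

    Two Bernoulli arms; arm 0 is truly best, but the attacker corrupts
    arm 0's rewards so its empirical mean always sits a margin below the
    target arm's, steering the greedy choices toward the target.
    Returns (fraction of pulls on target, total attack cost).
    """
    rng = random.Random(seed)
    true_means = [0.9, 0.2]            # arm 0 is truly best; attacker wants arm 1
    counts, sums = [0, 0], [0.0, 0.0]  # statistics as seen by the learner
    attack_cost, target_pulls = 0.0, 0
    for t in range(1, T + 1):
        eps_t = min(1.0, 2.0 / t)      # decaying exploration schedule (assumed)
        if counts[target] == 0:        # initialize: pull target first,
            arm = target
        elif 0 in counts:              # then any remaining unpulled arm
            arm = counts.index(0)
        elif rng.random() < eps_t:     # explore uniformly at random
            arm = rng.randrange(2)
        else:                          # exploit the (poisoned) empirical means
            arm = max(range(2), key=lambda i: sums[i] / counts[i])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        if arm != target:
            # Attacker replaces the reward so this arm's empirical mean
            # lands exactly 0.1 below the target's current empirical mean.
            desired_mean = sums[target] / counts[target] - 0.1
            corrupted = desired_mean * (counts[arm] + 1) - sums[arm]
            attack_cost += abs(reward - corrupted)  # cost = perturbation size
            reward = corrupted
        counts[arm] += 1
        sums[arm] += reward
        target_pulls += (arm == target)
    return target_pulls / T, attack_cost
```

After initialization, every greedy step picks the target, so the non-target arm is pulled only on the decaying exploration steps; the attacker's total effort stays a tiny fraction of the horizon, echoing the paper's logarithmic-cost message.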

Cite

Text

Jun et al. "Adversarial Attacks on Stochastic Bandits." Neural Information Processing Systems, 2018.

Markdown

[Jun et al. "Adversarial Attacks on Stochastic Bandits." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/jun2018neurips-adversarial/)

BibTeX

@inproceedings{jun2018neurips-adversarial,
  title     = {{Adversarial Attacks on Stochastic Bandits}},
  author    = {Jun, Kwang-Sung and Li, Lihong and Ma, Yuzhe and Zhu, Xiaojin},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {3640--3649},
  url       = {https://mlanthology.org/neurips/2018/jun2018neurips-adversarial/}
}