Action Poisoning Attacks on Linear Contextual Bandits

Abstract

Contextual bandit algorithms have many applications across a wide variety of scenarios. In order to develop trustworthy contextual bandit systems, understanding the impact of various adversarial attacks on contextual bandit algorithms is essential. In this paper, we propose a new class of attacks: action poisoning attacks, in which an adversary can change the action signal selected by the agent. We design action poisoning attack schemes against disjoint linear contextual bandit algorithms in both white-box and black-box settings. We further analyze the cost of the proposed attack strategies against a very popular and widely used bandit algorithm: LinUCB. We show that, in both white-box and black-box settings, the proposed attack schemes can force the LinUCB agent to pull a target arm very frequently while incurring only logarithmic attack cost. We also extend the proposed attack strategies to generalized linear contextual bandit models and demonstrate their effectiveness.
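To make the threat model concrete, below is a minimal toy sketch (not the authors' attack scheme) of a disjoint LinUCB agent facing an action-poisoning adversary: whenever the agent picks a non-target arm, the attacker silently swaps the executed action for the target arm, so the reward the agent observes comes from an arm it did not choose. The parameters (`d`, `K`, `alpha`, the override rule, and the cost accounting) are illustrative assumptions; in particular, this naive "attack every round" rule incurs linear cost, whereas the schemes analyzed in the paper attack selectively to achieve logarithmic cost.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, T = 5, 4, 2000               # context dimension, number of arms, horizon (illustrative)
target_arm = 0                      # arm the attacker wants pulled
theta = rng.normal(size=(K, d))     # true (disjoint) per-arm parameters, unknown to the agent

# Disjoint LinUCB state: one ridge-regression model per arm
A = np.stack([np.eye(d) for _ in range(K)])   # per-arm design matrices
b = np.zeros((K, d))                           # per-arm response vectors
alpha = 1.0                                    # exploration weight (assumed)
attack_cost = 0
target_pulls = 0

for t in range(T):
    x = rng.normal(size=d) / np.sqrt(d)        # context for this round

    # LinUCB arm selection: estimated mean reward plus exploration bonus
    ucb = np.empty(K)
    for a in range(K):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]
        ucb[a] = x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x)
    chosen = int(np.argmax(ucb))
    target_pulls += (chosen == target_arm)

    # Action poisoning (toy rule, an assumption for illustration):
    # if the agent avoids the target arm, the attacker overrides the
    # executed action with the target arm and pays one unit of cost.
    executed = chosen
    if chosen != target_arm:
        executed = target_arm
        attack_cost += 1

    # The reward comes from the arm actually executed in the environment,
    # but the agent updates the statistics of the arm it believes it played.
    reward = theta[executed] @ x + 0.1 * rng.normal()
    A[chosen] += np.outer(x, x)
    b[chosen] += reward * x

print(f"target-arm pull fraction: {target_pulls / T:.3f}, attack cost: {attack_cost}")
```

Because the agent's estimate of every non-target arm is fit to rewards generated by the target arm, the non-target arms lose any apparent advantage and the agent drifts toward the target arm; the paper's white-box and black-box schemes achieve this effect while attacking only O(log T) rounds.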

Cite

Text

Liu and Lai. "Action Poisoning Attacks on Linear Contextual Bandits." Transactions on Machine Learning Research, 2023.

Markdown

[Liu and Lai. "Action Poisoning Attacks on Linear Contextual Bandits." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/liu2023tmlr-action/)

BibTeX

@article{liu2023tmlr-action,
  title     = {{Action Poisoning Attacks on Linear Contextual Bandits}},
  author    = {Liu, Guanlin and Lai, Lifeng},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/liu2023tmlr-action/}
}