Data Poisoning Attacks Against Autoregressive Models

Abstract

Forecasting models play a key role in money-making ventures in many different markets. Such models are often trained on data from various sources, some of which may be untrustworthy. An actor in a given market may be incentivised to drive predictions in a certain direction to their own benefit. Prior analyses of intelligent adversaries in a machine-learning context have focused on regression and classification. In this paper we address the non-iid setting of time series forecasting. We consider a forecaster, Bob, using a fixed, known model and a recursive forecasting method. An adversary, Alice, aims to pull Bob's forecasts toward her desired target series, and may exercise limited influence on the initial values fed into Bob's model. We consider the class of linear autoregressive models, and a flexible framework of encoding Alice's desires and constraints. We describe a method of calculating Alice's optimal attack that is computationally tractable, and empirically demonstrate its effectiveness compared to random and greedy baselines on synthetic and real-world time series data. We conclude by discussing defensive strategies in the face of Alice-like adversaries.
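The key observation behind tractability is that, for a linear AR model forecast recursively, the predictions are a linear function of the initial window Alice can influence, so a quadratic attack objective reduces to a small least-squares problem. The sketch below is a minimal illustration of that idea under simplifying assumptions, not the paper's implementation: a ridge penalty stands in for Alice's cost and constraint framework, and the function names and example coefficients are hypothetical.

```python
import numpy as np

def companion(coeffs):
    """Companion matrix for an AR(d) model x_t = sum_i coeffs[i-1] * x_{t-i}."""
    d = len(coeffs)
    A = np.zeros((d, d))
    A[:-1, 1:] = np.eye(d - 1)      # shift the window forward by one step
    A[-1, :] = coeffs[::-1]         # newest value produced by the AR recursion
    return A

def forecast_matrix(coeffs, horizon):
    """M such that the recursive h-step forecasts equal M @ x0,
    where x0 holds the last d observed values (oldest first)."""
    d = len(coeffs)
    A = companion(coeffs)
    rows, Ak = [], np.eye(d)
    for _ in range(horizon):
        Ak = A @ Ak
        rows.append(Ak[-1, :])      # last state component is the newest forecast
    return np.vstack(rows)

def optimal_poisoning(coeffs, x0, target, lam=0.1):
    """Perturbation of the initial window minimizing
    ||M (x0 + delta) - target||^2 + lam * ||delta||^2 (hypothetical quadratic cost)."""
    M = forecast_matrix(coeffs, len(target))
    residual = target - M @ x0
    delta = np.linalg.solve(M.T @ M + lam * np.eye(len(x0)), M.T @ residual)
    return delta, M @ (x0 + delta)

# Example: Bob runs an AR(2) forecaster; Alice nudges the two initial values.
coeffs = np.array([1.2, -0.4])      # x_t = 1.2 x_{t-1} - 0.4 x_{t-2}
x0 = np.array([1.0, 1.5])           # initial window, oldest first
target = np.full(5, 3.0)            # Alice wants the next 5 forecasts near 3
delta, poisoned = optimal_poisoning(coeffs, x0, target)
print("perturbation:", delta)
print("poisoned forecasts:", poisoned)
```

In this simplified form the attack is a single linear solve in the window size d; the paper's framework generalizes the quadratic penalty to richer encodings of Alice's desires and hard constraints on her influence.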

Cite

Text

Alfeld et al. "Data Poisoning Attacks Against Autoregressive Models." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10237

Markdown

[Alfeld et al. "Data Poisoning Attacks Against Autoregressive Models." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/alfeld2016aaai-data/) doi:10.1609/AAAI.V30I1.10237

BibTeX

@inproceedings{alfeld2016aaai-data,
  title     = {{Data Poisoning Attacks Against Autoregressive Models}},
  author    = {Alfeld, Scott and Zhu, Xiaojin and Barford, Paul},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {1452--1458},
  doi       = {10.1609/AAAI.V30I1.10237},
  url       = {https://mlanthology.org/aaai/2016/alfeld2016aaai-data/}
}