Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms
Abstract
This work studies the threat of adversarial attacks on multivariate probabilistic forecasting models and viable defense mechanisms. Our study uncovers a new attack pattern that degrades the forecasts for a target time series by making strategic, sparse (imperceptible) modifications to the past observations of a small number of other time series. To mitigate the impact of such attacks, we develop two defense strategies. First, we extend a randomized smoothing technique, previously developed for classification, to the multivariate forecasting setting. Second, we develop an adversarial training algorithm that learns to create adversarial examples while simultaneously optimizing the forecasting model to improve its robustness against such simulated attacks. Extensive experiments on real-world datasets confirm that our attack schemes are powerful and that our defense algorithms are more effective than baseline defense mechanisms.
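To make the two defense strategies concrete, below are minimal sketches in PyTorch. These are illustrative assumptions rather than the paper's implementation: the `forecast_model` interface, the noise scale `sigma`, and the PGD-style inner maximization are placeholders standing in for the paper's specific design choices.

```python
import torch

def smoothed_forecast(forecast_model, history, sigma=0.1, n_samples=100):
    """Randomized smoothing for forecasting (sketch): average the model's
    forecasts over Gaussian-perturbed copies of the observed history.

    history: tensor of shape (num_series, context_length)
    returns: tensor of shape (num_series, prediction_length)
    """
    preds = []
    with torch.no_grad():
        for _ in range(n_samples):
            # Perturb the past observations with isotropic Gaussian noise.
            noisy = history + sigma * torch.randn_like(history)
            preds.append(forecast_model(noisy))
    # The smoothed forecaster is the Monte Carlo average over noise draws.
    return torch.stack(preds).mean(dim=0)
```

An adversarial training loop alternates between crafting a perturbation of the past observations and updating the model on the perturbed input. The sketch below uses a generic PGD inner loop under an L-infinity budget `epsilon`, again as an assumption rather than the paper's exact algorithm.

```python
def adversarial_training_step(forecast_model, optimizer, history, target,
                              loss_fn, epsilon=0.05, alpha=0.01, pgd_steps=5):
    """One adversarial training step (sketch): maximize the forecasting loss
    over a bounded perturbation of the history, then minimize the loss on
    the perturbed input with a standard gradient step."""
    delta = torch.zeros_like(history, requires_grad=True)
    for _ in range(pgd_steps):
        loss = loss_fn(forecast_model(history + delta), target)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()     # ascent on the attack objective
            delta.clamp_(-epsilon, epsilon)  # stay within the perturbation budget
    optimizer.zero_grad()
    loss_fn(forecast_model(history + delta.detach()), target).backward()
    optimizer.step()
```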
Cite
Text
Liu et al. "Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms." International Conference on Learning Representations, 2023.
Markdown
[Liu et al. "Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/liu2023iclr-robust-a/)
BibTeX
@inproceedings{liu2023iclr-robust-a,
title = {{Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms}},
author = {Liu, Linbo and Park, Youngsuk and Hoang, Trong Nghia and Hasson, Hilaf and Huan, Luke},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/liu2023iclr-robust-a/}
}