Adversarial Poisoning Attacks on Reinforcement Learning-Driven Energy Pricing
Abstract
Reinforcement learning (RL) has emerged as a strong candidate for implementing complex controls in energy systems, such as energy pricing in microgrids. But what happens when some of the microgrid controllers are compromised by a malicious entity? We demonstrate a novel poisoning attack on RL: the attack perturbs each compromised trajectory so as to reverse the direction of the estimated gradient. We show that if data from even a small fraction of microgrid controllers is adversarially perturbed, the RL agent's learning can be significantly slowed or, with larger perturbations, the agent can be driven to operate at a loss. Prosumers also face higher energy costs, use their batteries less, and suffer higher peak demand when the pricing aggregator is poisoned. We address this vulnerability with a "defense" module, i.e., a "robustification" of RL algorithms against this attack: the defense identifies the trajectories with the largest influence on the gradient and removes them from the training data.
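To make the abstract's two mechanisms concrete, below is a minimal, hypothetical sketch in PyTorch, not the authors' code. It assumes a REINFORCE-style gradient estimate and a policy.log_prob(states, actions) helper (both are illustrative assumptions): the attack flips a compromised trajectory's gradient contribution by perturbing its returns within a budget eps, and the defense drops the k trajectories whose per-trajectory gradient contributions have the largest norm.

import torch

def per_trajectory_gradients(policy, trajectories):
    """Policy-gradient contribution of each trajectory, taken separately."""
    grads = []
    for states, actions, returns in trajectories:
        log_probs = policy.log_prob(states, actions)   # hypothetical helper
        loss = -(log_probs * returns).sum()            # REINFORCE objective
        g = torch.autograd.grad(loss, list(policy.parameters()))
        grads.append(torch.cat([p.reshape(-1) for p in g]))
    return grads

def poison_returns(returns, eps):
    """Attack sketch: flipping the sign of the returns flips the sign of this
    trajectory's gradient contribution; the clamp bounds the perturbation."""
    delta = torch.clamp(-2.0 * returns, -eps, eps)
    return returns + delta

def drop_most_influential(policy, trajectories, k):
    """Defense sketch: remove the k trajectories whose gradient contributions
    have the largest norm, i.e. the largest influence on the aggregate gradient."""
    grads = per_trajectory_gradients(policy, trajectories)
    influence = torch.stack([g.norm() for g in grads])
    keep = influence.argsort()[: len(trajectories) - k]
    return [trajectories[i] for i in keep.tolist()]

In this reading, the eps budget stands in for the "small perturbation" regime the abstract describes (slowed learning) versus the "larger perturbation" regime (operating at a loss); the exact perturbation the paper uses may differ.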
Cite
Text
Gunn et al. "Adversarial Poisoning Attacks on Reinforcement Learning-Driven Energy Pricing." NeurIPS 2022 Workshops: MLSW, 2022.

Markdown
[Gunn et al. "Adversarial Poisoning Attacks on Reinforcement Learning-Driven Energy Pricing." NeurIPS 2022 Workshops: MLSW, 2022.](https://mlanthology.org/neuripsw/2022/gunn2022neuripsw-adversarial/)

BibTeX
@inproceedings{gunn2022neuripsw-adversarial,
  title = {{Adversarial Poisoning Attacks on Reinforcement Learning-Driven Energy Pricing}},
  author = {Gunn, Sam and Jang, Doseok and Paradise, Orr and Spangher, Lucas and Spanos, Costas},
  booktitle = {NeurIPS 2022 Workshops: MLSW},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/gunn2022neuripsw-adversarial/}
}