Recurrent Deep Multiagent Q-Learning for Autonomous Brokers in Smart Grid
Abstract
The broker mechanism is widely applied to help interested parties derive long-term policies that reduce costs or increase profits in the smart grid. However, a broker faces a number of challenging problems, such as balancing demand and supply from customers and competing with other coexisting brokers to maximize its profit. In this paper, we develop an effective pricing strategy for brokers in a local electricity retail market based on recurrent deep multiagent reinforcement learning and sequential clustering. We use real household electricity consumption data to simulate the retail market and evaluate our strategy. The experiments demonstrate the superior performance of the proposed pricing strategy and highlight the effectiveness of our reward shaping mechanism.
Cite
Text
Yang et al. "Recurrent Deep Multiagent Q-Learning for Autonomous Brokers in Smart Grid." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/79
Markdown
[Yang et al. "Recurrent Deep Multiagent Q-Learning for Autonomous Brokers in Smart Grid." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/yang2018ijcai-recurrent/) doi:10.24963/IJCAI.2018/79
BibTeX
@inproceedings{yang2018ijcai-recurrent,
title = {{Recurrent Deep Multiagent Q-Learning for Autonomous Brokers in Smart Grid}},
author = {Yang, Yaodong and Hao, Jianye and Sun, Mingyang and Wang, Zan and Fan, Changjie and Strbac, Goran},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
  pages = {569--575},
doi = {10.24963/IJCAI.2018/79},
url = {https://mlanthology.org/ijcai/2018/yang2018ijcai-recurrent/}
}