SustainGym: Reinforcement Learning Environments for Sustainable Energy Systems
Abstract
The lack of standardized benchmarks for reinforcement learning (RL) in sustainability applications has made it difficult both to track progress in specific domains and to identify the bottlenecks on which researchers should focus their efforts. In this paper, we present SustainGym, a suite of five environments designed to test the performance of RL algorithms on realistic sustainable energy system tasks, ranging from electric vehicle charging to carbon-aware data center job scheduling. The environments test RL algorithms under realistic distribution shifts as well as in multi-agent settings. We show that standard off-the-shelf RL algorithms leave significant room for improving performance and highlight the challenges ahead for introducing RL to real-world sustainability tasks.
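The environments in suites like SustainGym follow the standard Gymnasium interface convention, where an agent interacts with an environment through a reset/step loop. The sketch below illustrates that convention with a hypothetical toy stand-in (the `ToyChargingEnv` class, its dynamics, and all numeric values are invented for illustration and are not SustainGym's actual API):

```python
import random


class ToyChargingEnv:
    """Hypothetical toy stand-in for a SustainGym-style environment.

    Follows the Gymnasium interface convention:
        reset()      -> (obs, info)
        step(action) -> (obs, reward, terminated, truncated, info)
    """

    def __init__(self, horizon=24, capacity=10.0, seed=0):
        self.horizon = horizon    # timesteps per episode
        self.capacity = capacity  # total energy demand (illustrative units)
        self.rng = random.Random(seed)
        self.t = 0
        self.remaining = capacity

    def reset(self):
        self.t = 0
        self.remaining = self.capacity
        return self._obs(), {}

    def _obs(self):
        # Observation: current timestep and remaining charging demand.
        return (self.t, self.remaining)

    def step(self, action):
        # action: charging rate in [0, 1]. Reward credits delivered energy
        # and penalizes it by a random "carbon intensity" signal, loosely
        # mimicking a carbon-aware objective.
        delivered = min(self.remaining, float(action))
        self.remaining -= delivered
        carbon = self.rng.uniform(0.2, 1.0)
        reward = delivered - 0.5 * carbon * delivered
        self.t += 1
        terminated = self.remaining <= 0       # demand fully met
        truncated = self.t >= self.horizon     # episode time limit reached
        return self._obs(), reward, terminated, truncated, {}


# Standard interaction loop with a trivial "always charge at full rate" policy.
env = ToyChargingEnv()
obs, info = env.reset()
total, done = 0.0, False
while not done:
    obs, reward, terminated, truncated, info = env.step(1.0)
    total += reward
    done = terminated or truncated
print(f"episode return: {total:.2f}")
```

An RL algorithm would replace the fixed action with a learned policy; the benchmark's value lies in the realistic dynamics, distribution shifts, and multi-agent variants behind this same loop.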
Cite
Text
Yeh et al. "SustainGym: Reinforcement Learning Environments for Sustainable Energy Systems." Neural Information Processing Systems, 2023.
Markdown
[Yeh et al. "SustainGym: Reinforcement Learning Environments for Sustainable Energy Systems." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/yeh2023neurips-sustaingym/)
BibTeX
@inproceedings{yeh2023neurips-sustaingym,
title = {{SustainGym: Reinforcement Learning Environments for Sustainable Energy Systems}},
author = {Yeh, Christopher and Li, Victor and Datta, Rajeev and Arroyo, Julio and Christianson, Nicolas and Zhang, Chi and Chen, Yize and Hosseini, Mohammad Mehdi and Golmohammadi, Azarang and Shi, Yuanyuan and Yue, Yisong and Wierman, Adam},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/yeh2023neurips-sustaingym/}
}