Learning to Design Games: Strategic Environments in Reinforcement Learning

Abstract

In typical reinforcement learning (RL), the environment is assumed to be given, and the goal of learning is to identify an optimal policy for an agent that takes actions through its interactions with the environment. In this paper, we extend this setting to one in which the environment is not given but is instead controllable and learnable through its interaction with the agent. This extension is motivated by real-world environment design scenarios, including game design, shopping space design, and traffic signal design. Theoretically, we identify a Markov decision process (MDP) with respect to the environment that is dual to the MDP with respect to the agent, and we derive a policy gradient solution for optimizing the parameterized environment. Furthermore, we address discontinuous environments with a proposed general generative framework. Our experiments on a Maze game design task demonstrate that the proposed algorithms effectively generate diverse and challenging Mazes against various agent settings.
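To make the abstract's central idea concrete, the following is a minimal Python sketch of optimizing a parameterized environment with a score-function (REINFORCE-style) policy gradient: the environment is a distribution over maze layouts whose parameters are updated to maximize a designer reward. All specifics here (the independent-Bernoulli wall parameterization, the stand-in agent_return, the toy designer reward) are illustrative assumptions, not the authors' implementation; the paper's actual objective and agent training loop are more involved.

import numpy as np

rng = np.random.default_rng(0)

SIZE = 5                       # 5x5 maze grid (illustrative)
theta = np.zeros(SIZE * SIZE)  # logits: probability each cell is a wall
LR = 0.1

def sample_maze(theta):
    """Sample a maze layout from the parameterized environment distribution."""
    p = 1.0 / (1.0 + np.exp(-theta))              # independent Bernoulli walls
    walls = (rng.random(p.shape) < p).astype(float)
    return walls, p

def agent_return(walls):
    """Placeholder for training/evaluating an RL agent in the sampled maze.
    Here it is a stand-in score so the sketch runs end to end."""
    return -walls.sum()

for step in range(200):
    walls, p = sample_maze(theta)
    # Toy designer reward: make the maze challenging (low agent return).
    # In the paper, the designer's objective is richer than this stand-in.
    r_env = -agent_return(walls)
    # Score-function gradient of E[r_env] w.r.t. theta; for Bernoulli walls
    # with sigmoid logits, grad log p_theta(walls) = walls - p.
    grad = r_env * (walls - p)
    theta += LR * grad

The same update structure applies whenever the environment distribution is differentiable in its parameters; in practice a baseline is usually subtracted from r_env to reduce gradient variance.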

Cite

Text

Zhang et al. "Learning to Design Games: Strategic Environments in Reinforcement Learning." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/426

Markdown

[Zhang et al. "Learning to Design Games: Strategic Environments in Reinforcement Learning." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/zhang2018ijcai-learning/) doi:10.24963/IJCAI.2018/426

BibTeX

@inproceedings{zhang2018ijcai-learning,
  title     = {{Learning to Design Games: Strategic Environments in Reinforcement Learning}},
  author    = {Zhang, Haifeng and Wang, Jun and Zhou, Zhiming and Zhang, Weinan and Wen, Ying and Yu, Yong and Li, Wenxin},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {3068--3074},
  doi       = {10.24963/IJCAI.2018/426},
  url       = {https://mlanthology.org/ijcai/2018/zhang2018ijcai-learning/}
}