Using Unity to Help Solve Reinforcement Learning

Abstract

Leveraging the depth and flexibility of XLand as well as the rapid prototyping features of the Unity engine, we present the United Unity Universe (U3), an open-source toolkit designed to accelerate the creation of innovative reinforcement learning environments. The toolkit includes a robust implementation of XLand 2.0 complemented by a user-friendly interface that lets users easily modify the details of procedurally generated terrains and task rules. Additionally, we provide a curated selection of terrains and rule sets, accompanied by implementations of reinforcement learning baselines, to facilitate quick experimentation with novel architectural designs for adaptive agents. Furthermore, we illustrate how U3 serves as a high-level language that enables researchers to develop diverse and endlessly variable 3D environments within a unified framework. This functionality establishes U3 as an essential tool for advancing the field of reinforcement learning, especially the development of adaptive and generalizable learning systems.

Cite

Text

Brennan et al. "Using Unity to Help Solve Reinforcement Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-0081

Markdown

[Brennan et al. "Using Unity to Help Solve Reinforcement Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/brennan2024neurips-using/) doi:10.52202/079017-0081

BibTeX

@inproceedings{brennan2024neurips-using,
  title     = {{Using Unity to Help Solve Reinforcement Learning}},
  author    = {Brennan, Connor and Williams, Andrew Robert and Younis, Omar G. and Vyas, Vedant and Yasafova, Daria and Rish, Irina},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0081},
  url       = {https://mlanthology.org/neurips/2024/brennan2024neurips-using/}
}