CALE: Continuous Arcade Learning Environment

Abstract

We introduce the Continuous Arcade Learning Environment (CALE), an extension of the well-known Arcade Learning Environment (ALE) [Bellemare et al., 2013]. The CALE uses the same underlying emulator of the Atari 2600 gaming system (Stella), but adds support for continuous actions. This enables the benchmarking and evaluation of continuous-control agents (such as PPO [Schulman et al., 2017] and SAC [Haarnoja et al., 2018]) and value-based agents (such as DQN [Mnih et al., 2015] and Rainbow [Hessel et al., 2018]) on the same environment suite. We provide a series of open questions and research directions that CALE enables, as well as initial baseline results using Soft Actor-Critic. CALE is available as part of the ALE at https://github.com/Farama-Foundation/Arcade-Learning-Environment.
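
For readers who want to try the continuous action space, below is a minimal usage sketch (not taken from the paper) exercising CALE through the ALE's Gymnasium integration. The `continuous` keyword and the `ALE/Breakout-v5` environment ID are assumptions based on the repository linked above and may differ across versions.

import gymnasium as gym
import ale_py  # provides the ALE/... environments

gym.register_envs(ale_py)  # register the Atari environments with Gymnasium

# Assumed API: passing continuous=True requests the CALE action space
# instead of the usual discrete Atari actions.
env = gym.make("ALE/Breakout-v5", continuous=True)
obs, info = env.reset(seed=0)

for _ in range(100):
    # With CALE, actions are real-valued vectors rather than a discrete index.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()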

Cite

Text

Farebrother and Castro. "CALE: Continuous Arcade Learning Environment." Neural Information Processing Systems, 2024. doi:10.52202/079017-4288

Markdown

[Farebrother and Castro. "CALE: Continuous Arcade Learning Environment." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/farebrother2024neurips-cale/) doi:10.52202/079017-4288

BibTeX

@inproceedings{farebrother2024neurips-cale,
  title     = {{CALE: Continuous Arcade Learning Environment}},
  author    = {Farebrother, Jesse and Castro, Pablo Samuel},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-4288},
  url       = {https://mlanthology.org/neurips/2024/farebrother2024neurips-cale/}
}