ARCLE: The Abstraction and Reasoning Corpus Learning Environment for Reinforcement Learning
Abstract
This paper introduces ARCLE, an environment designed to facilitate reinforcement learning research on the Abstraction and Reasoning Corpus (ARC). Addressing this inductive reasoning benchmark with reinforcement learning presents several challenges: a vast action space, a hard-to-reach goal, and a wide variety of tasks. We demonstrate that an agent trained with proximal policy optimization (PPO) can learn individual tasks through ARCLE. Adopting non-factorial policies and auxiliary losses improved performance, effectively mitigating the issues of the vast action space and the hard-to-reach goal. Based on these insights, we propose several research directions and motivations for using ARCLE, including MAML, GFlowNets, and World Models.
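For orientation, the sketch below shows how an ARCLE task could be driven through the standard Gymnasium reset/step loop, with a random action standing in for a trained PPO policy. The environment ID "ARCLE/O2ARCv2Env-v0" and the assumption that importing arcle registers environments with Gymnasium are illustrative guesses, not details stated in the abstract; consult the ARCLE package for the actual names.

# Minimal sketch, assuming ARCLE registers Gymnasium environments on import and
# that an environment ID of the form "ARCLE/O2ARCv2Env-v0" exists; check the
# ARCLE package for the actual ID and constructor options.
import gymnasium as gym
import arcle  # assumed to register ARCLE environments with Gymnasium

env = gym.make("ARCLE/O2ARCv2Env-v0")  # hypothetical environment ID
obs, info = env.reset(seed=0)

terminated = truncated = False
while not (terminated or truncated):
    # A trained agent (e.g., a PPO policy as in the paper) would pick the action
    # here; a random sample only illustrates the interaction loop.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)

env.close()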
Cite
Text
Lee et al. "ARCLE: The Abstraction and Reasoning Corpus Learning Environment for Reinforcement Learning." Proceedings of The 3rd Conference on Lifelong Learning Agents, 2024.
Markdown
[Lee et al. "ARCLE: The Abstraction and Reasoning Corpus Learning Environment for Reinforcement Learning." Proceedings of The 3rd Conference on Lifelong Learning Agents, 2024.](https://mlanthology.org/collas/2024/lee2024collas-arcle/)
BibTeX
@inproceedings{lee2024collas-arcle,
  title     = {{ARCLE: The Abstraction and Reasoning Corpus Learning Environment for Reinforcement Learning}},
  author    = {Lee, Hosung and Kim, Sejin and Lee, Seungpil and Hwang, Sanha and Lee, Jihwan and Lee, Byung-Jun and Kim, Sundong},
  booktitle = {Proceedings of The 3rd Conference on Lifelong Learning Agents},
  year      = {2024},
  volume    = {274},
  pages     = {710--731},
  url       = {https://mlanthology.org/collas/2024/lee2024collas-arcle/}
}