lilGym: Natural Language Visual Reasoning with Reinforcement Learning
Abstract
We present lilGym, a new benchmark for language-conditioned reinforcement learning in visual environments. lilGym is based on 2,661 highly compositional human-written natural language statements grounded in an interactive visual environment. We annotate all statements with executable Python programs representing their meaning to enable exact reward computation in every possible world state. Each statement is paired with multiple start states and reward functions to form thousands of distinct Markov Decision Processes of varying difficulty. We experiment with lilGym using different models and learning regimes. Our results and analysis show that while existing methods are able to achieve non-trivial performance, lilGym forms a challenging open problem. lilGym is available at https://lil.nlp.cornell.edu/lilgym/.
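As a rough illustration of the setup described in the abstract, the sketch below shows a standard Gym-style interaction loop over one of the benchmark's MDPs. The module path lilgym.envs and the environment ID "TowerScratch-v0" are illustrative assumptions rather than names confirmed by this page; the key property is that the reward at each step can be computed exactly by executing the statement's annotated Python program against the current world state.

# Minimal interaction sketch, assuming lilGym exposes standard Gym environments.
# The import path and environment ID below are hypothetical placeholders.
import gym
import lilgym.envs  # hypothetical: assumed to register the lilGym environments

env = gym.make("TowerScratch-v0")  # assumed ID: one statement/start-state MDP family
obs = env.reset()  # observation pairs the visual world state with the NL statement
done = False
while not done:
    action = env.action_space.sample()  # replace with a learned, language-conditioned policy
    obs, reward, done, info = env.step(action)
    # reward is computed exactly from the statement's annotated Python program
    # evaluated on the resulting world state
env.close()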
Cite
Text
Wu et al. "${lil}$Gym: Natural Language Visual Reasoning with Reinforcement Learning." NeurIPS 2022 Workshops: LaReL, 2022.Markdown
[Wu et al. "${lil}$Gym: Natural Language Visual Reasoning with Reinforcement Learning." NeurIPS 2022 Workshops: LaReL, 2022.](https://mlanthology.org/neuripsw/2022/wu2022neuripsw-lilgym/)BibTeX
@inproceedings{wu2022neuripsw-lilgym,
title = {{lilGym: Natural Language Visual Reasoning with Reinforcement Learning}},
author = {Wu, Anne and Brantley, Kianté and Kojima, Noriyuki and Artzi, Yoav},
booktitle = {NeurIPS 2022 Workshops: LaReL},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/wu2022neuripsw-lilgym/}
}