Synthesizing Programmatic Reinforcement Learning Policies with Large Language Model Guided Search

Abstract

Programmatic reinforcement learning (PRL) has been explored for representing policies through programs as a means to achieve interpretability and generalization. Despite promising outcomes, current state-of-the-art PRL methods are hindered by sample inefficiency, necessitating tens of millions of program-environment interactions. To tackle this challenge, we introduce a novel LLM-guided search framework (LLM-GS). Our key insight is to leverage the programming expertise and common-sense reasoning of LLMs to enhance the efficiency of assumption-free, random-guessing search methods. We address the challenge of LLMs' inability to generate precise and grammatically correct programs in domain-specific languages (DSLs) by proposing a Pythonic-DSL strategy: an LLM is instructed to first generate Python code and then convert it into DSL programs. To further optimize the LLM-generated programs, we develop a search algorithm named Scheduled Hill Climbing, designed to efficiently explore the programmatic search space and consistently improve the programs. Experimental results in the Karel domain demonstrate our LLM-GS framework's superior effectiveness and efficiency. Extensive ablation studies further verify the critical role of our Pythonic-DSL strategy and Scheduled Hill Climbing algorithm. Moreover, we conduct experiments on two novel tasks, showing that LLM-GS enables users without programming skills or knowledge of the domain or DSL to obtain performant programs by describing the tasks in natural language.
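The abstract only names the search procedure, so below is a minimal, hedged sketch of the general idea of hill climbing over a programmatic search space: seed the search with an LLM-generated program, then repeatedly mutate and keep improvements. This is not the paper's actual Scheduled Hill Climbing algorithm or its Karel DSL; the `mutate` and `evaluate` functions and the toy scoring are illustrative assumptions standing in for DSL-level program edits and environment rollouts.

```python
import random

# Hypothetical stand-ins: a real PRL search would mutate DSL programs and
# score them by executing them in the Karel environment.
ACTIONS = ["move", "turnLeft", "turnRight", "putMarker"]

def mutate(program: str) -> str:
    """Return a randomly perturbed copy of the program (illustrative only)."""
    tokens = program.split()
    i = random.randrange(len(tokens))
    tokens[i] = random.choice(ACTIONS)
    return " ".join(tokens)

def evaluate(program: str) -> float:
    """Toy fitness: count 'move' tokens; a real evaluator would return the
    episodic return of rolling the program out in the environment."""
    return float(program.split().count("move"))

def hill_climb(seed_program: str, budget: int = 100) -> str:
    """Greedy local search: keep the best candidate, accept higher-scoring mutations."""
    best, best_score = seed_program, evaluate(seed_program)
    for _ in range(budget):
        candidate = mutate(best)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

if __name__ == "__main__":
    # Seed with an (assumed) LLM-generated program and refine it locally.
    print(hill_climb("move turnLeft move putMarker"))
```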

Cite

Text

Liu et al. "Synthesizing Programmatic Reinforcement Learning Policies with Large Language Model Guided Search." International Conference on Learning Representations, 2025.

Markdown

[Liu et al. "Synthesizing Programmatic Reinforcement Learning Policies with Large Language Model Guided Search." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/liu2025iclr-synthesizing/)

BibTeX

@inproceedings{liu2025iclr-synthesizing,
  title     = {{Synthesizing Programmatic Reinforcement Learning Policies with Large Language Model Guided Search}},
  author    = {Liu, Max and Yu, Chan-Hung and Lee, Wei-Hsu and Hung, Cheng-Wei and Chen, Yen-Chun and Sun, Shao-Hua},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/liu2025iclr-synthesizing/}
}