ERFSL: An Efficient Reward Function Searcher via Large Language Models for Custom-Environment Multi-Objective Reinforcement Learning (Student Abstract)
Abstract
We propose ERFSL, an efficient reward function searcher that uses large language models (LLMs) for custom-environment, multi-objective reinforcement learning (RL). ERFSL generates a reward component for each explicit user requirement, rectifies the components using feedback, and iteratively optimizes their weights based on textual training context. Applied to an underwater data collection RL task, ERFSL corrects the reward component code with only one feedback iteration per requirement and acquires diverse reward functions within the Pareto set. ERFSL also remains robust when the initial weights deviate and when smaller LLMs such as GPT-4o mini are used. The full-text prompts, examples of LLM-generated answers, and source code are available at https://360zmem.github.io/LLMRsearcher/ .
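The abstract describes a two-stage pipeline: generate one reward component per user requirement, then search over the component weights. As a rough illustration only (the component functions, state fields, and the random weight update below are hypothetical stand-ins, not ERFSL's actual code, rewards, or prompts, and the real search is driven by an LLM reading textual training context), a weighted-sum reward and its weight-search loop might look like:

```python
import numpy as np

# Hypothetical reward components, one per user requirement. These are
# illustrative placeholders, not the paper's underwater data-collection rewards.
def r_data_rate(state):
    return state["collected"]          # encourage data collection
def r_energy(state):
    return -state["energy_used"]       # penalize energy consumption
def r_collision(state):
    return -float(state["collided"])   # penalize collisions

COMPONENTS = [r_data_rate, r_energy, r_collision]

def reward(state, weights):
    """Weighted sum of the reward components (an assumed composition)."""
    return sum(w * r(state) for w, r in zip(weights, COMPONENTS))

# Toy stand-in for the weight-search loop: in ERFSL the new weights are
# proposed by an LLM from textual feedback; here we perturb them randomly
# just to show the iteration structure.
def propose_weights(weights, rng):
    return [max(0.0, w * rng.uniform(0.5, 2.0)) for w in weights]

rng = np.random.default_rng(0)
weights = [1.0, 1.0, 1.0]
state = {"collected": 3.2, "energy_used": 1.5, "collided": False}
for it in range(3):
    print(f"iter {it}: weights={weights}, reward={reward(state, weights):.3f}")
    weights = propose_weights(weights, rng)
```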
Cite
Text
Xie et al. "ERFSL: An Efficient Reward Function Searcher via Large Language Models for Custom-Environment Multi-Objective Reinforcement Learning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35316
Markdown
[Xie et al. "ERFSL: An Efficient Reward Function Searcher via Large Language Models for Custom-Environment Multi-Objective Reinforcement Learning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/xie2025aaai-erfsl/) doi:10.1609/AAAI.V39I28.35316
BibTeX
@inproceedings{xie2025aaai-erfsl,
title = {{ERFSL: An Efficient Reward Function Searcher via Large Language Models for Custom-Environment Multi-Objective Reinforcement Learning (Student Abstract)}},
author = {Xie, Guanwen and Xu, Jingzehua and Yang, Yiyuan and Ding, Yimian and Zhang, Shuai},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {29535--29537},
doi = {10.1609/AAAI.V39I28.35316},
url = {https://mlanthology.org/aaai/2025/xie2025aaai-erfsl/}
}