DynaAct: Large Language Model Reasoning with Dynamic Action Spaces

Abstract

In modern sequential decision-making systems, constructing a good candidate action space is critical for efficient inference. However, existing approaches either rely on manually defined action spaces that lack scalability or use unstructured spaces that make exhaustive search computationally prohibitive. In this paper, we propose a novel framework named DynaAct for automatically constructing a compact action space to enhance sequential reasoning in complex problem-solving scenarios. Our method first estimates a proxy for the complete action space by using large language models to extract general sketches from a corpus covering diverse complex reasoning problems. We then formulate a submodular function that jointly evaluates candidate actions based on their utility to the current state and their diversity, and employ a greedy algorithm to select an optimal candidate set. Extensive experiments on six diverse standard benchmarks demonstrate that our approach significantly improves overall performance while maintaining efficient inference, without introducing substantial latency. The implementation is available at https://github.com/zhaoxlpku/DynaAct.
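The greedy selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `utilities` scores, the pairwise `similarity` matrix, and the trade-off weight `lam` are all assumed inputs, and the objective shown (per-action utility minus a penalty for redundancy with already-selected actions) is just one simple utility-plus-diversity formulation; the actual submodular function in the paper may differ.

```python
def greedy_select(utilities, similarity, k, lam=1.0):
    """Greedily build a compact candidate set of k actions.

    utilities:  list of per-action utility scores for the current state
    similarity: square matrix, similarity[i][j] in [0, 1] between actions i, j
    k:          size of the candidate set to return
    lam:        weight of the diversity (redundancy) penalty

    At each step, pick the action with the largest marginal gain:
    its utility minus lam times its maximum similarity to any action
    already selected. The penalty only grows as the set grows, so
    marginal gains are diminishing, as in greedy submodular maximization.
    """
    selected = []
    remaining = set(range(len(utilities)))
    while len(selected) < k and remaining:
        def gain(i):
            redundancy = max((similarity[i][j] for j in selected), default=0.0)
            return utilities[i] - lam * redundancy
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected


# Toy example: actions 0 and 1 are near-duplicates; action 2 is distinct.
utilities = [1.0, 0.9, 0.2]
similarity = [
    [1.0, 0.95, 0.1],
    [0.95, 1.0, 0.1],
    [0.1, 0.1, 1.0],
]
print(greedy_select(utilities, similarity, k=2))  # → [0, 2]
```

Even though action 1 has higher raw utility than action 2, the diversity penalty makes the greedy step prefer the dissimilar action 2 for the second slot, yielding a more varied candidate set.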

Cite

Text

Zhao et al. "DynaAct: Large Language Model Reasoning with Dynamic Action Spaces." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhao et al. "DynaAct: Large Language Model Reasoning with Dynamic Action Spaces." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhao2025neurips-dynaact/)

BibTeX

@inproceedings{zhao2025neurips-dynaact,
  title     = {{DynaAct: Large Language Model Reasoning with Dynamic Action Spaces}},
  author    = {Zhao, Xueliang and Wu, Wei and Guan, Jian and Li, Qintong and Kong, Lingpeng},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhao2025neurips-dynaact/}
}