Policy Gradient with Tree Expansion
Abstract
Policy gradient methods are notorious for their large gradient variance and high sample complexity. To mitigate this, we introduce SoftTreeMax—a generalization of softmax that employs planning. In SoftTreeMax, we extend the traditional logits with the multi-step discounted cumulative reward, topped with the logits of future states. We analyze SoftTreeMax and explain how tree expansion helps to reduce its gradient variance. We prove that the variance depends on the chosen tree-expansion policy. Specifically, we show that the closer the induced transitions are to being state-independent, the stronger the variance decay. With approximate forward models, we prove that the resulting gradient bias diminishes with the approximation error while retaining the same variance reduction. Ours is the first result to bound the gradient bias for an approximate model. In a practical implementation of SoftTreeMax, we use a parallel GPU-based simulator for fast and efficient tree expansion. Using this implementation in Atari, we show that SoftTreeMax reduces the gradient variance by three orders of magnitude. This leads to better sample complexity and improved performance compared to distributed PPO.
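The logit construction described above can be sketched in code. The following is a minimal toy illustration, not the paper's implementation: it scores each root action by expanding the full action tree to a fixed depth, accumulating discounted rewards along each trajectory, adding the discounted policy logit at each leaf, and aggregating trajectories with a log-sum-exp before the final softmax. The `step`, `reward`, and `logits` functions and the tiny deterministic MDP are assumptions made up for the example.

```python
import numpy as np

def softtreemax_policy(s, depth, step, reward, logits, gamma=0.99, n_actions=2):
    """Toy SoftTreeMax-style policy: each root action's logit aggregates,
    over all depth-`depth` action sequences starting with that action, the
    discounted cumulative reward plus the discounted logit of the leaf state.
    """
    def traj_scores(state, d, acc, disc):
        # Enumerate all remaining trajectories; return one score per leaf.
        if d == 0:
            return [acc + disc * logits(state)]
        out = []
        for a in range(n_actions):
            out.extend(
                traj_scores(step(state, a), d - 1,
                            acc + disc * reward(state, a), disc * gamma)
            )
        return out

    # Aggregate trajectories per root action with a numerically stable log-sum-exp.
    scores = np.array([
        np.logaddexp.reduce(traj_scores(step(s, a), depth - 1, reward(s, a), gamma))
        for a in range(n_actions)
    ])
    return np.exp(scores - np.logaddexp.reduce(scores))  # softmax over tree logits

# Hypothetical 5-state deterministic MDP where action 1 always pays more reward.
step_fn = lambda s, a: (s + a + 1) % 5
reward_fn = lambda s, a: float(a)
logits_fn = lambda s: 0.0  # uniform leaf logits for simplicity

probs = softtreemax_policy(0, depth=2, step=step_fn, reward=reward_fn, logits=logits_fn)
```

With uniform leaf logits, the tree expansion makes the policy prefer the higher-reward action, and deeper trees sharpen that preference.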
Cite

Dalal et al. "Policy Gradient with Tree Expansion." Proceedings of the 42nd International Conference on Machine Learning, 2025. https://mlanthology.org/icml/2025/dalal2025icml-policy/

BibTeX
@inproceedings{dalal2025icml-policy,
title = {{Policy Gradient with Tree Expansion}},
author = {Dalal, Gal and Hallak, Assaf and Thoppe, Gugan and Mannor, Shie and Chechik, Gal},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {12229--12255},
volume = {267},
url = {https://mlanthology.org/icml/2025/dalal2025icml-policy/}
}