$q$-Exponential Family for Policy Optimization
Abstract
Policy optimization methods benefit from a simple and tractable policy parametrization, usually the Gaussian for continuous action spaces. In this paper, we consider a broader policy family that remains tractable: the $q$-exponential family. This family of policies is flexible, allowing the specification of both heavy-tailed policies ($q>1$) and light-tailed policies ($q<1$). This paper examines the interplay between these $q$-exponential policies and several actor-critic algorithms on both online and offline problems. We find that heavy-tailed policies are more effective in general and can consistently improve on the Gaussian. In particular, we find the Student's t-distribution to be more stable than the Gaussian across settings, and a heavy-tailed $q$-Gaussian for Tsallis Advantage Weighted Actor-Critic consistently performs well on offline benchmark problems. In summary, we find the Student's t policy to be a strong candidate for a drop-in replacement for the Gaussian. Our code is available at \url{https://github.com/lingweizhu/qexp}.
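For readers unfamiliar with the family, the following is a minimal sketch of the standard Tsallis $q$-exponential underlying these policies (not code from the paper's repository; the density form, parameter values, and function names here are illustrative only):

```python
# Sketch (assumed, not from the paper): the Tsallis q-exponential
# exp_q(x) = [1 + (1 - q) x]_+^{1 / (1 - q)}, which reduces to exp(x) as q -> 1.
# A 1-D q-Gaussian density is proportional to exp_q(-beta * x^2); q > 1 yields
# heavier-than-Gaussian tails, while q < 1 yields light tails with bounded support.
import numpy as np

def q_exp(x, q):
    """Standard q-exponential function from Tsallis statistics."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)   # the [.]_+ truncation
    return base ** (1.0 / (1.0 - q))

x = np.linspace(-5.0, 5.0, 101)
heavy = q_exp(-x**2, q=1.5)   # unnormalized heavy-tailed q-Gaussian
light = q_exp(-x**2, q=0.5)   # unnormalized light-tailed q-Gaussian (compact support)
```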
Cite
Text
Zhu et al. "$q$-Exponential Family for Policy Optimization." International Conference on Learning Representations, 2025.
Markdown
[Zhu et al. "$q$-Exponential Family for Policy Optimization." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhu2025iclr-qexponential/)
BibTeX
@inproceedings{zhu2025iclr-qexponential,
  title = {{$q$-Exponential Family for Policy Optimization}},
  author = {Zhu, Lingwei and Shah, Haseeb and Wang, Han and Nagai, Yukie and White, Martha},
  booktitle = {International Conference on Learning Representations},
  year = {2025},
  url = {https://mlanthology.org/iclr/2025/zhu2025iclr-qexponential/}
}