Langevin Soft Actor-Critic: Efficient Exploration Through Uncertainty-Driven Critic Learning

Abstract

Existing actor-critic algorithms, which are popular for continuous control reinforcement learning (RL) tasks, suffer from poor sample efficiency due to the lack of a principled exploration mechanism. Motivated by the success of Thompson sampling for efficient exploration in RL, we propose a novel model-free RL algorithm, \emph{Langevin Soft Actor-Critic} (LSAC), which prioritizes enhancing critic learning through uncertainty estimation over policy optimization. LSAC employs three key innovations: approximate Thompson sampling through distributional Langevin Monte Carlo (LMC)-based $Q$ updates, parallel tempering for exploring multiple modes of the posterior of the $Q$ function, and diffusion-synthesized state-action samples regularized with $Q$ action gradients. Our extensive experiments demonstrate that LSAC outperforms or matches the performance of mainstream model-free RL algorithms on continuous control tasks. Notably, LSAC marks the first successful application of LMC-based Thompson sampling to continuous control tasks with continuous action spaces.
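
To illustrate the first component, the following is a minimal sketch, not the authors' implementation, of an LMC-style (SGLD) critic update in PyTorch: a standard TD gradient step on the $Q$-network parameters plus injected Gaussian noise scaled by sqrt(2 * lr / beta), which produces approximate samples from a posterior over critic parameters and hence approximate Thompson sampling. All names here (QNetwork, lmc_critic_step, beta) are illustrative assumptions, not identifiers from the paper.

# Minimal sketch of a Langevin Monte Carlo (SGLD-style) critic update.
# Assumed, illustrative names: QNetwork, lmc_critic_step, beta.
import math
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def lmc_critic_step(q_net, states, actions, td_target, lr=3e-4, beta=1e4):
    """One SGLD-style update: gradient descent on the TD loss plus Langevin noise."""
    td_loss = ((q_net(states, actions) - td_target) ** 2).mean()
    q_net.zero_grad()
    td_loss.backward()
    noise_scale = math.sqrt(2.0 * lr / beta)  # inverse temperature beta controls noise magnitude
    with torch.no_grad():
        for p in q_net.parameters():
            p -= lr * p.grad                        # usual gradient step on the TD loss
            p += noise_scale * torch.randn_like(p)  # injected Langevin noise
    return td_loss.item()

Each call draws a new approximate posterior sample of the critic, so acting greedily with respect to the perturbed $Q$-network gives the exploratory behavior associated with Thompson sampling; the parallel tempering and diffusion-based components described in the paper are not sketched here.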

Cite

Text

Ishfaq et al. "Langevin Soft Actor-Critic: Efficient Exploration Through Uncertainty-Driven Critic Learning." International Conference on Learning Representations, 2025.

Markdown

[Ishfaq et al. "Langevin Soft Actor-Critic: Efficient Exploration Through Uncertainty-Driven Critic Learning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/ishfaq2025iclr-langevin/)

BibTeX

@inproceedings{ishfaq2025iclr-langevin,
  title     = {{Langevin Soft Actor-Critic: Efficient Exploration Through Uncertainty-Driven Critic Learning}},
  author    = {Ishfaq, Haque and Wang, Guangyuan and Islam, Sami Nur and Precup, Doina},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/ishfaq2025iclr-langevin/}
}