Continuous Q-Score Matching: Diffusion Guided Reinforcement Learning for Continuous-Time Control
Abstract
Reinforcement learning (RL) has achieved significant success across a wide range of domains; however, most existing methods are formulated in discrete time. In this work, we introduce a novel RL method for continuous-time control, where stochastic differential equations govern state-action dynamics. Departing from traditional value function-based approaches, our key contribution is the characterization of continuous-time Q-functions via a martingale condition and the linking of diffusion policy scores to the action gradient of a learned continuous Q-function through the dynamic programming principle. This insight motivates Continuous Q-Score Matching (CQSM), a score-based policy improvement algorithm. Notably, our method addresses a long-standing challenge in continuous-time RL: preserving the action-evaluation capability of Q-functions without relying on time discretization. We further provide theoretical closed-form solutions for linear-quadratic (LQ) control problems within our framework. Numerical results in simulated environments demonstrate the effectiveness of our proposed method and compare it to popular baselines.
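The abstract's central idea, regressing the score of a diffusion policy onto the action gradient of a learned Q-function, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration only: the network architectures, dimensions, names (`q_net`, `score_net`, `q_score_matching_loss`), and the plain L2 objective are assumptions, and the paper's martingale-based training of the continuous-time critic and its exact CQSM formulation are not reproduced here.

```python
# Hypothetical sketch of the Q-score matching idea described in the abstract:
# the diffusion policy's score network is fit to the action gradient of Q.
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2

# Critic: continuous-time Q-function approximator. Its martingale-based
# training objective is omitted; assume it is trained separately.
q_net = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, 1))

# Actor: score network of the diffusion policy, mapping (state, action) to a score vector.
score_net = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))

def q_score_matching_loss(states, actions):
    """Match the policy score to the action gradient of Q (illustrative only)."""
    actions = actions.detach().requires_grad_(True)
    q_values = q_net(torch.cat([states, actions], dim=-1)).sum()
    # Target: the action gradient of the learned Q-function, treated as fixed.
    grad_q = torch.autograd.grad(q_values, actions)[0]
    predicted_score = score_net(torch.cat([states, actions], dim=-1))
    return ((predicted_score - grad_q.detach()) ** 2).mean()

# Toy usage on random data.
states = torch.randn(8, state_dim)
actions = torch.randn(8, action_dim)
loss = q_score_matching_loss(states, actions)
loss.backward()
```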
Cite
Text
Hua et al. "Continuous Q-Score Matching: Diffusion Guided Reinforcement Learning for Continuous-Time Control." Advances in Neural Information Processing Systems, 2025.
Markdown
[Hua et al. "Continuous Q-Score Matching: Diffusion Guided Reinforcement Learning for Continuous-Time Control." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/hua2025neurips-continuous/)
BibTeX
@inproceedings{hua2025neurips-continuous,
title = {{Continuous Q-Score Matching: Diffusion Guided Reinforcement Learning for Continuous-Time Control}},
author = {Hua, Chengxiu and Gu, Jiawen and Tang, Yushun},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/hua2025neurips-continuous/}
}