Regret-Optimal Q-Learning with Low Cost for Single-Agent and Federated Reinforcement Learning
Abstract
Motivated by real-world settings where data collection and policy deployment, whether for a single agent or across multiple agents, are costly, we study on-policy single-agent reinforcement learning (RL) and federated RL (FRL) with a focus on minimizing burn-in costs (the sample sizes needed to reach near-optimal regret) and policy switching or communication costs. In parallel finite-horizon episodic Markov Decision Processes (MDPs) with $S$ states and $A$ actions, existing methods either require burn-in costs that scale superlinearly in $S$ and $A$ or fail to achieve logarithmic switching or communication costs. We propose two novel model-free RL algorithms, Q-EarlySettled-LowCost and FedQ-EarlySettled-LowCost, which are the first in the literature to simultaneously achieve: (i) near-optimal regret matching the best among all known model-free RL or FRL algorithms, (ii) a low burn-in cost that scales linearly with $S$ and $A$, and (iii) logarithmic policy switching cost for single-agent RL or communication cost for FRL. Additionally, we establish gap-dependent theoretical guarantees for both regret and switching/communication costs, improving or matching the best-known gap-dependent bounds.
Cite
Text
Zhang et al. "Regret-Optimal Q-Learning with Low Cost for Single-Agent and Federated Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.
Markdown
[Zhang et al. "Regret-Optimal Q-Learning with Low Cost for Single-Agent and Federated Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhang2025neurips-regretoptimal/)
BibTeX
@inproceedings{zhang2025neurips-regretoptimal,
title = {{Regret-Optimal Q-Learning with Low Cost for Single-Agent and Federated Reinforcement Learning}},
author = {Zhang, Haochen and Zheng, Zhong and Xue, Lingzhou},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/zhang2025neurips-regretoptimal/}
}