Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration

Abstract

This paper studies regret minimization with randomized value functions in reinforcement learning. For tabular finite-horizon Markov Decision Processes, we introduce a clipped variant of a classical Thompson Sampling (TS)-like algorithm, randomized least-squares value iteration (RLSVI). Our $\tilde{\mathrm{O}}(H^2S\sqrt{AT})$ high-probability worst-case regret bound improves on the previously sharpest worst-case regret bound for RLSVI and matches the state-of-the-art worst-case regret bound among TS-based algorithms.
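To make the algorithm referenced in the abstract concrete, below is a minimal sketch of the planning step of tabular RLSVI with clipping. It is not the paper's exact procedure: the function name `rlsvi_plan`, the arrays `counts`, `r_hat`, `p_hat` (empirical visit counts, mean rewards, and transition estimates, assumed maintained elsewhere), and the noise scale `sigma0 * H / sqrt(n)` are illustrative assumptions; the paper's analysis uses its own constants. The key ingredients are the Gaussian perturbation of the value estimates (the TS-like randomization) and the clipping of the backed-up values to the valid range $[0, H]$.

```python
import numpy as np

def rlsvi_plan(counts, r_hat, p_hat, H, rng, sigma0=1.0):
    """Sketch of one episode's planning in tabular RLSVI with clipping.

    Assumed shapes (hypothetical conventions, not from the paper):
      counts: (H, S, A)    visit counts per step/state/action
      r_hat:  (H, S, A)    empirical mean rewards
      p_hat:  (H, S, A, S) empirical transition probabilities
    Returns randomized Q-values; the agent acts greedily w.r.t. them.
    """
    S, A = r_hat.shape[1], r_hat.shape[2]
    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 2, S))  # V[H+1] = 0 is the terminal value
    for h in range(H, 0, -1):
        # Gaussian perturbation shrinking with visit counts (TS-like noise;
        # the exact scale in the paper differs from this placeholder).
        noise = rng.normal(0.0, sigma0 * H / np.sqrt(np.maximum(counts[h - 1], 1)))
        # Randomized least-squares backup: reward + noise + expected next value.
        Q[h] = r_hat[h - 1] + noise + p_hat[h - 1] @ V[h + 1]
        # Clipping step: keep the randomized values in the feasible range [0, H],
        # which is the modification that enables the improved regret analysis.
        V[h] = np.clip(Q[h].max(axis=1), 0.0, H)
    return Q

# Example usage with random statistics (H=3, S=4, A=2):
rng = np.random.default_rng(0)
H, S, A = 3, 4, 2
counts = rng.integers(1, 10, size=(H, S, A)).astype(float)
r_hat = rng.uniform(0, 1, size=(H, S, A))
p_hat = rng.dirichlet(np.ones(S), size=(H, S, A))
Q = rlsvi_plan(counts, r_hat, p_hat, H, rng)
greedy_action = Q[1, 0].argmax()  # action at step 1, state 0
```

Without the clipping line, the noise can push intermediate value estimates outside $[0, H]$; bounding them is what tightens the worst-case analysis relative to earlier RLSVI bounds.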

Cite

Text

Agrawal et al. "Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I8.16813

Markdown

[Agrawal et al. "Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/agrawal2021aaai-improved/) doi:10.1609/AAAI.V35I8.16813

BibTeX

@inproceedings{agrawal2021aaai-improved,
  title     = {{Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration}},
  author    = {Agrawal, Priyank and Chen, Jinglin and Jiang, Nan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {6566--6573},
  doi       = {10.1609/AAAI.V35I8.16813},
  url       = {https://mlanthology.org/aaai/2021/agrawal2021aaai-improved/}
}