Concave Utility Reinforcement Learning with Zero-Constraint Violations

Abstract

We consider the problem of tabular infinite-horizon concave utility reinforcement learning (CURL) with convex constraints. For this, we propose a model-based learning algorithm that also achieves zero constraint violations. Assuming that the concave objective and the convex constraints have a solution interior to the set of feasible occupation measures, we solve a tighter optimization problem to ensure that the constraints are never violated despite the imprecise model knowledge and model stochasticity. We use a Bellman error-based analysis for tabular infinite-horizon setups, which allows analyzing stochastic policies. Combining the Bellman error-based analysis and the tighter optimization equation, for $T$ interactions with the environment, we obtain a high-probability regret guarantee for the objective which grows as $\tilde{O}(1/\sqrt{T})$, excluding other factors. The proposed method can be applied to optimistic algorithms to obtain high-probability regret bounds and can also be used for posterior sampling algorithms to obtain looser Bayesian regret bounds but with a significant improvement in computational complexity.
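
The constraint-tightening idea described above can be summarized by a schematic program; this is a sketch of the general technique, not the paper's exact formulation, and the symbols $f$, $g_i$, $\Lambda$, and the slack $\epsilon$ are illustrative placeholders for the paper's notation.

% Schematic sketch of constraint tightening over occupation measures \lambda.
\begin{align*}
  \max_{\lambda \in \Lambda} \;& f(\lambda)
    && \text{(concave utility)} \\
  \text{s.t.}\;& g_i(\lambda) \le -\epsilon, \quad i = 1, \dots, I
    && \text{(convex constraints, tightened by a slack } \epsilon > 0\text{)}
\end{align*}
% Here \Lambda denotes the set of feasible occupation measures induced by the
% (estimated) transition model. Under the interior-solution assumption, the
% slack \epsilon can be chosen large enough to absorb model-estimation error
% and stochasticity (so the true constraints g_i(\lambda) \le 0 are never
% violated) while decaying with T so that the objective regret remains
% \tilde{O}(1/\sqrt{T}), excluding other factors.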

Cite

Text

Agarwal et al. "Concave Utility Reinforcement Learning with Zero-Constraint Violations." Transactions on Machine Learning Research, 2022.

Markdown

[Agarwal et al. "Concave Utility Reinforcement Learning with Zero-Constraint Violations." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/agarwal2022tmlr-concave/)

BibTeX

@article{agarwal2022tmlr-concave,
  title     = {{Concave Utility Reinforcement Learning with Zero-Constraint Violations}},
  author    = {Agarwal, Mridul and Bai, Qinbo and Aggarwal, Vaneet},
  journal   = {Transactions on Machine Learning Research},
  year      = {2022},
  url       = {https://mlanthology.org/tmlr/2022/agarwal2022tmlr-concave/}
}