First Order Constrained Optimization in Policy Space

Abstract

In reinforcement learning, an agent attempts to learn high-performing behaviors by interacting with the environment; such behaviors are often quantified in the form of a reward function. However, some aspects of behavior, such as those deemed unsafe and to be avoided, are best captured through constraints. We propose a novel approach called First Order Constrained Optimization in Policy Space (FOCOPS) which maximizes an agent's overall reward while ensuring the agent satisfies a set of cost constraints. Using data generated from the current policy, FOCOPS first finds the optimal update policy by solving a constrained optimization problem in the nonparameterized policy space. FOCOPS then projects the update policy back into the parametric policy space. Our approach has an approximate upper bound for worst-case constraint violation throughout training and is first-order in nature, and therefore simple to implement. We provide empirical evidence that our simple approach achieves better performance on a set of constrained robotic locomotion tasks.
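To make the two-step structure concrete, below is a minimal PyTorch sketch of what the projection step can look like as a first-order loss: the nonparameterized solution weights actions by their cost-penalized advantage, and the parametric policy is pulled toward it by minimizing a per-state KL term, with states outside a small KL trust region masked out. The hyperparameters (`lam`, `nu`, `delta`, `nu_max`) and function names are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def focops_policy_loss(dist_new, dist_old, actions, adv, cost_adv,
                       lam=1.5, nu=0.1, delta=0.02):
    """Sketch of a FOCOPS-style projection loss (illustrative values).

    dist_new : current policy distribution pi_theta(.|s), being updated
    dist_old : behavior policy distribution pi_theta_k(.|s), held fixed
    adv      : reward advantage estimates  A(s, a)
    cost_adv : cost advantage estimates    A_C(s, a)
    lam, nu  : temperature and cost-penalty weight of the update policy
    delta    : per-state KL budget; states beyond it stop contributing
    """
    logp_new = dist_new.log_prob(actions)
    logp_old = dist_old.log_prob(actions).detach()
    ratio = torch.exp(logp_new - logp_old)

    # Per-state KL divergence KL(pi_theta || pi_theta_k).
    kl = torch.distributions.kl_divergence(dist_new, dist_old)

    # Project the nonparameterized update policy back into parameter space:
    # stay close to the old policy in KL while moving along the
    # cost-penalized advantage direction, only on in-trust-region states.
    loss = kl - (1.0 / lam) * ratio * (adv - nu * cost_adv)
    return (loss * (kl.detach() <= delta).float()).mean()


def update_nu(nu, episode_cost, cost_limit, step=0.01, nu_max=2.0):
    """Projected gradient step on the cost multiplier nu (illustrative)."""
    nu = nu + step * (episode_cost - cost_limit)
    return float(min(max(nu, 0.0), nu_max))
```

In a training loop one would compute `focops_policy_loss` on each minibatch, backpropagate through the current policy only, and call `update_nu` once per iteration with the measured episode cost, so that the cost penalty grows when the constraint is violated and shrinks toward zero otherwise.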

Cite

Text

Zhang et al. "First Order Constrained Optimization in Policy Space." Neural Information Processing Systems, 2020.

Markdown

[Zhang et al. "First Order Constrained Optimization in Policy Space." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/zhang2020neurips-first/)

BibTeX

@inproceedings{zhang2020neurips-first,
  title     = {{First Order Constrained Optimization in Policy Space}},
  author    = {Zhang, Yiming and Vuong, Quan and Ross, Keith},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/zhang2020neurips-first/}
}