Critic Regularized Regression

Abstract

Offline reinforcement learning (RL), also known as batch RL, offers the prospect of policy optimization from large pre-recorded datasets without online environment interaction. It addresses challenges regarding the cost of data collection and safety, both of which are particularly pertinent to real-world applications of RL. Unfortunately, most off-policy algorithms perform poorly when learning from a fixed dataset. In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR). We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces, outperforming several state-of-the-art offline RL algorithms by a significant margin on a wide range of benchmark tasks.
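The abstract does not spell out the objective, but one common reading of "critic-regularized regression" is behavior-cloning-style regression toward dataset actions, re-weighted (or filtered) by a critic-derived advantage estimate. The sketch below illustrates that idea only; the function names, the beta parameter, and the exponential/indicator weightings are illustrative assumptions, not details taken from this page, and readers should consult the paper for the actual formulation.

import numpy as np

def crr_policy_weights(q_sa, q_s_pi_samples, beta=1.0, mode="exp"):
    """Critic-based weights for regressing the policy toward dataset actions.

    q_sa:            critic values Q(s, a) for dataset (state, action) pairs, shape [B]
    q_s_pi_samples:  critic values Q(s, a_j) for actions sampled from the current
                     policy at the same states, shape [B, m]
    """
    # Advantage estimate: how much better the dataset action is than the
    # policy's own actions under the learned critic.
    advantage = q_sa - q_s_pi_samples.mean(axis=1)
    if mode == "exp":
        # Soft re-weighting: emphasize actions the critic scores highly.
        return np.exp(advantage / beta)
    # Hard filter: clone only actions the critic judges to be improvements.
    return (advantage > 0).astype(np.float64)

def crr_policy_loss(log_pi_a_given_s, weights):
    """Weighted negative log-likelihood of dataset actions under the policy,
    i.e. supervised regression regularized by the critic."""
    return -(weights * log_pi_a_given_s).mean()

As a usage note, the two modes in the sketch trade off differently: an exponential transform keeps gradient signal from every dataset action while tilting toward high-advantage ones, whereas a binary indicator discards actions the critic deems worse than the current policy, which is more conservative when the critic is imperfect.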

Cite

Text

Wang et al. "Critic Regularized Regression." Neural Information Processing Systems, 2020.

Markdown

[Wang et al. "Critic Regularized Regression." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/wang2020neurips-critic/)

BibTeX

@inproceedings{wang2020neurips-critic,
  title     = {{Critic Regularized Regression}},
  author    = {Wang, Ziyu and Novikov, Alexander and Zolna, Konrad and Merel, Josh S. and Springenberg, Jost Tobias and Reed, Scott E. and Shahriari, Bobak and Siegel, Noah and Gulcehre, Caglar and Heess, Nicolas and de Freitas, Nando},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/wang2020neurips-critic/}
}