The In-Sample SoftMax for Offline Reinforcement Learning
Abstract
Reinforcement learning (RL) agents can leverage batches of previously collected data to extract a reasonable control policy. An emerging issue in this offline RL setting, however, is that the bootstrapping update underlying many of our methods suffers from insufficient action coverage: the standard max operator may select a maximal action that has not been seen in the dataset. Bootstrapping from these inaccurate values can lead to overestimation and even divergence. A growing number of methods attempt to approximate an in-sample max, which uses only actions well-covered by the dataset. We highlight a simple fact: it is more straightforward to approximate an in-sample softmax using only actions in the dataset. We show that policy iteration based on the in-sample softmax converges, and that for decreasing temperatures it approaches the in-sample max. We derive an In-Sample Actor-Critic (AC) using this in-sample softmax, and show that it is consistently better than or comparable to existing offline RL methods and is also well-suited to fine-tuning. We release the code at github.com/hwang-ua/inac_pytorch.
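The core idea of the abstract can be illustrated with a minimal sketch (not the paper's implementation): a soft value tau * logsumexp(Q / tau) computed over only the actions observed in the dataset at a state approaches the in-sample max as the temperature tau decreases, and never bootstraps from an out-of-sample action. The Q-values and action sets below are hypothetical.

```python
# Minimal illustration of an in-sample softmax value, assuming a discrete
# action space and hypothetical Q-values. Not the paper's implementation.
import torch

def in_sample_softmax_value(q_in_sample: torch.Tensor, tau: float) -> torch.Tensor:
    """Soft value over in-sample Q-values only; q_in_sample contains Q(s, a)
    for the actions actually observed at this state in the dataset."""
    return tau * torch.logsumexp(q_in_sample / tau, dim=-1)

# Hypothetical Q-values: only the first three actions appear in the dataset.
q_all = torch.tensor([1.0, 0.5, 0.2, 5.0])  # the 5.0 action is out-of-sample
q_in = q_all[:3]

for tau in (1.0, 0.1, 0.01):
    v = in_sample_softmax_value(q_in, tau)
    print(f"tau={tau:>4}: in-sample softmax value = {v.item():.3f}")
# As tau -> 0, the value approaches max(q_in) = 1.0 and ignores the unseen 5.0,
# whereas a standard max over all actions would bootstrap from the unseen action.
```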
Cite
Text
Xiao et al. "The In-Sample SoftMax for Offline Reinforcement Learning." International Conference on Learning Representations, 2023.
Markdown
[Xiao et al. "The In-Sample SoftMax for Offline Reinforcement Learning." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/xiao2023iclr-insample/)
BibTeX
@inproceedings{xiao2023iclr-insample,
title = {{The In-Sample SoftMax for Offline Reinforcement Learning}},
author = {Xiao, Chenjun and Wang, Han and Pan, Yangchen and White, Adam and White, Martha},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/xiao2023iclr-insample/}
}