Rethinking Optimal Transport in Offline Reinforcement Learning
Abstract
We propose a novel algorithm for offline reinforcement learning based on optimal transport. In offline reinforcement learning, the data is typically collected by multiple experts, some of whom may be sub-optimal. To extract an efficient policy, it is necessary to \emph{stitch} together the best behaviors from the dataset. To address this problem, we rethink offline reinforcement learning as an optimal transport problem. Based on this view, we present an algorithm that seeks a policy mapping states to a \emph{partial} distribution of the best expert actions for each given state. We evaluate the performance of our algorithm on continuous control problems from the D4RL suite and demonstrate improvements over existing methods.
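For context, the abstract's idea can be read against the standard \emph{partial} optimal transport formulation. The following is an illustrative sketch only, assuming the usual Kantorovich setup; the symbols $\mu$, $\nu$, $c$, and the mass fraction $w$ below are generic notation, not necessarily the paper's:
\[
\min_{\gamma \in \Pi_{w}(\mu,\nu)} \int_{\mathcal{S}\times\mathcal{A}} c(s,a)\,\mathrm{d}\gamma(s,a),
\qquad
\Pi_{w}(\mu,\nu) = \bigl\{\gamma \ge 0 : \gamma_{\mathcal{S}} \le \mu,\ \gamma_{\mathcal{A}} \le \nu,\ \gamma(\mathcal{S}\times\mathcal{A}) = w\bigr\},
\]
where $\mu$ is a distribution over states $\mathcal{S}$, $\nu$ a distribution over dataset (expert) actions $\mathcal{A}$, $c(s,a)$ a cost discouraging poor state-action pairs (e.g., a negative critic value), and $w \le 1$ the fraction of mass that must be transported. Choosing $w < 1$ is what lets the resulting map ignore sub-optimal actions and target only the best portion of the expert action distribution.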
Cite
Text
Asadulaev et al. "Rethinking Optimal Transport in Offline Reinforcement Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-3929
Markdown
[Asadulaev et al. "Rethinking Optimal Transport in Offline Reinforcement Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/asadulaev2024neurips-rethinking/) doi:10.52202/079017-3929
BibTeX
@inproceedings{asadulaev2024neurips-rethinking,
title = {{Rethinking Optimal Transport in Offline Reinforcement Learning}},
author = {Asadulaev, Arip and Korst, Rostislav and Korotin, Alexander and Egiazarian, Vage and Filchenkov, Andrey and Burnaev, Evgeny},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-3929},
url = {https://mlanthology.org/neurips/2024/asadulaev2024neurips-rethinking/}
}