EMPO: A Clustering-Based On-Policy Algorithm for Offline Reinforcement Learning

Abstract

We propose an on-policy algorithm, Expectation–Maximization Policy Optimization (EMPO), for offline reinforcement learning that leverages an EM-based clustering algorithm to recover the behaviour policies used to generate the dataset. By improving each behaviour policy via proximal policy optimization and learning a high-level policy that chooses the optimal cluster at each step, EMPO outperforms existing offline RL algorithms on multiple benchmarks.
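The clustering step described above can be illustrated with a toy EM fit: a mixture model over logged actions whose components stand in for distinct behaviour policies, with responsibilities assigning each transition to a cluster. This is only a minimal sketch under assumed modeling choices (1-D Gaussian components, actions-only features); the paper's actual policy model and features may differ.

```python
import numpy as np

def em_cluster_behaviours(actions, n_clusters=2, n_iters=50):
    """Toy EM: fit a 1-D Gaussian mixture over logged actions to estimate
    which of K behaviour policies generated each sample (illustrative only)."""
    a = np.asarray(actions, dtype=float)
    # Deterministic init: spread component means across the data quantiles.
    mu = np.quantile(a, np.linspace(0.1, 0.9, n_clusters))
    var = np.ones(n_clusters)
    pi = np.full(n_clusters, 1.0 / n_clusters)
    for _ in range(n_iters):
        # E-step: responsibility of each cluster for each sample.
        log_p = -0.5 * ((a[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
        log_w = log_p + np.log(pi)
        log_w -= log_w.max(axis=1, keepdims=True)  # stabilize before exp
        r = np.exp(log_w)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances, and mixture weights.
        nk = r.sum(axis=0)
        mu = (r * a[:, None]).sum(axis=0) / nk
        var = (r * (a[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(a)
    return mu, r

# Synthetic dataset logged by two behaviour policies: actions near -2 and +2.
rng = np.random.default_rng(1)
acts = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])
means, resp = em_cluster_behaviours(acts, n_clusters=2)
```

The recovered cluster means land near the two generating policies, and each row of `resp` gives the soft cluster assignment a high-level policy could condition on.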

Cite

Text

Park et al. "EMPO: A Clustering-Based On-Policy Algorithm for Offline Reinforcement Learning." ICML 2024 Workshops: ARLET, 2024.

Markdown

[Park et al. "EMPO: A Clustering-Based On-Policy Algorithm for Offline Reinforcement Learning." ICML 2024 Workshops: ARLET, 2024.](https://mlanthology.org/icmlw/2024/park2024icmlw-empo/)

BibTeX

@inproceedings{park2024icmlw-empo,
  title     = {{EMPO: A Clustering-Based On-Policy Algorithm for Offline Reinforcement Learning}},
  author    = {Park, Jongeui and Cho, Myungsik and Sung, Youngchul},
  booktitle = {ICML 2024 Workshops: ARLET},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/park2024icmlw-empo/}
}