Cluster-Based Sampling in Hindsight Experience Replay for Robotic Tasks (Student Abstract)

Abstract

In multi-goal reinforcement learning with a sparse binary reward, training agents is particularly challenging due to the lack of successful experiences. To address this problem, hindsight experience replay (HER) generates successful experiences even from unsuccessful ones. However, generating successful experiences from uniformly sampled ones is not an efficient process. In this paper, the impact of exploiting the property of achieved goals when generating successful experiences is investigated, and a novel cluster-based sampling strategy is proposed. The proposed strategy groups episodes with different achieved goals using a cluster model and samples experiences in the manner of HER to create the training batch. The proposed method is validated by experiments with three robotic control tasks from OpenAI Gym. The experimental results demonstrate that the proposed method is substantially more sample-efficient and achieves better performance than baseline approaches.
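The abstract does not specify the cluster model or the exact sampling rule, so the following is only a minimal Python sketch of the idea it describes: episodes are grouped by their achieved goals with a clustering model (k-means is assumed here), and training transitions are drawn across clusters and relabeled in the manner of HER's "future" strategy with a sparse -1/0 reward. The episode fields (observations, actions, achieved_goals) and the distance tolerance are hypothetical, not taken from the paper.

# Hypothetical sketch of cluster-based sampling for HER.
# Assumptions: k-means over each episode's final achieved goal,
# uniform sampling across clusters, HER "future" relabeling,
# and a sparse reward of 0 on success, -1 otherwise.
import numpy as np
from sklearn.cluster import KMeans


def cluster_episodes(episodes, n_clusters=4):
    """Group episodes by the achieved goal at their final step."""
    final_goals = np.array([ep["achieved_goals"][-1] for ep in episodes])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(final_goals)
    clusters = [[] for _ in range(n_clusters)]
    for ep, label in zip(episodes, labels):
        clusters[label].append(ep)
    return clusters


def sample_batch(clusters, batch_size, rng=None):
    """Draw transitions across clusters and relabel goals (HER 'future')."""
    if rng is None:
        rng = np.random.default_rng()
    non_empty = [c for c in clusters if c]
    batch = []
    for _ in range(batch_size):
        cluster = non_empty[rng.integers(len(non_empty))]   # pick a cluster
        ep = cluster[rng.integers(len(cluster))]            # pick an episode
        ep_len = len(ep["actions"])
        t = rng.integers(ep_len)                            # transition index
        future = rng.integers(t, ep_len)                    # HER "future" step
        relabeled_goal = ep["achieved_goals"][future]       # hindsight goal
        success = np.allclose(ep["achieved_goals"][t], relabeled_goal, atol=0.05)
        batch.append({
            "obs": ep["observations"][t],
            "action": ep["actions"][t],
            "goal": relabeled_goal,
            "reward": 0.0 if success else -1.0,
        })
    return batch

In use, an off-policy agent (e.g., DDPG, as commonly paired with HER) would store completed episodes in a replay buffer, periodically re-cluster them with cluster_episodes, and call sample_batch at each update step; this mirrors the paper's stated goal of replacing uniform episode sampling with cluster-aware sampling.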

Cite

Text

Kim and Har. "Cluster-Based Sampling in Hindsight Experience Replay for Robotic Tasks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30465

Markdown

[Kim and Har. "Cluster-Based Sampling in Hindsight Experience Replay for Robotic Tasks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/kim2024aaai-cluster/) doi:10.1609/AAAI.V38I21.30465

BibTeX

@inproceedings{kim2024aaai-cluster,
  title     = {{Cluster-Based Sampling in Hindsight Experience Replay for Robotic Tasks (Student Abstract)}},
  author    = {Kim, Taeyoung and Har, Dongsoo},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {23544--23545},
  doi       = {10.1609/AAAI.V38I21.30465},
  url       = {https://mlanthology.org/aaai/2024/kim2024aaai-cluster/}
}