RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation

Abstract

Deep reinforcement learning (DRL) is playing an increasingly important role in real-world applications. However, obtaining an optimally performing DRL agent for complex tasks, especially with sparse rewards, remains a significant challenge. The training of a DRL agent can often become trapped in a bottleneck and stop making progress. In this paper, we propose RICE, an innovative refining scheme for reinforcement learning that incorporates explanation methods to break through the training bottlenecks. The high-level idea of RICE is to construct a new initial state distribution that combines both the default initial states and critical states identified through explanation methods, thereby encouraging the agent to explore from these mixed initial states. Through careful design, we can theoretically guarantee that our refining scheme has a tighter sub-optimality bound. We evaluate RICE in various popular RL environments and real-world applications. The results demonstrate that RICE significantly outperforms existing refining schemes in enhancing agent performance.
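To make the mixed initial state distribution concrete, below is a minimal sketch of the reset scheme the abstract describes, assuming a classic-control Gymnasium environment whose internal state can be restored by direct assignment. The names `mixed_reset`, `critical_states`, and `mix_prob` are illustrative rather than from the paper, and the explanation method that identifies the critical states is elided.

```python
import random

import numpy as np
import gymnasium as gym


def mixed_reset(env, critical_states, mix_prob=0.25):
    """Reset from a mixture of the environment's default initial
    distribution and explanation-identified critical states
    (a sketch of RICE's high-level idea, not the paper's code)."""
    obs, info = env.reset()
    if critical_states and random.random() < mix_prob:
        # Hypothetical restoration hook: classic-control Gymnasium envs
        # (e.g., CartPole) expose their state as a writable attribute.
        state = random.choice(critical_states)
        env.unwrapped.state = np.asarray(state, dtype=np.float64)
        obs = np.asarray(state, dtype=np.float32)
    return obs, info


if __name__ == "__main__":
    env = gym.make("CartPole-v1")
    # Stand-in critical state; in RICE these come from an explanation method.
    critical = [np.array([0.0, 0.0, 0.15, 0.0])]
    obs, _ = mixed_reset(env, critical)
    print(obs)
```

With probability `mix_prob` the agent resumes from a previously identified critical state; otherwise it resets from the default distribution, so both sources contribute to exploration during refinement.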

Cite

Text

Cheng et al. "RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation." International Conference on Machine Learning, 2024.

Markdown

[Cheng et al. "RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/cheng2024icml-rice/)

BibTeX

@inproceedings{cheng2024icml-rice,
  title     = {{RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation}},
  author    = {Cheng, Zelei and Wu, Xian and Yu, Jiahao and Yang, Sabrina and Wang, Gang and Xing, Xinyu},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {8203--8228},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/cheng2024icml-rice/}
}