Intelligent Switching for Reset-Free RL
Abstract
In the real world, the strong episode-resetting mechanisms that are needed to train agents in simulation are unavailable. This resetting assumption limits the potential of reinforcement learning in the real world, because providing resets to an agent usually requires additional handcrafted mechanisms or human interventions. Recent work aims to train a forward agent with learned resets by constructing a second, backward agent that returns the forward agent to the initial state. We find that the termination and timing of the transitions between these two agents are crucial for algorithm success. With this in mind, we create a new algorithm, Reset Free RL with Intelligently Switching Controller (RISC), which intelligently switches between the two agents based on the current agent's confidence in achieving its goal. Our new method achieves state-of-the-art performance on several challenging environments for reset-free RL.
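The switching rule described in the abstract can be pictured as a simple control loop. The sketch below is a minimal illustration, not the authors' RISC implementation: the names `Agent`, `confidence`, `env_step`, and `threshold` are hypothetical, and the confidence estimate is a random placeholder standing in for a learned success estimate. It shows only the shape of the idea: hand control between the forward and backward agents whenever the active agent is confident it can complete its current goal.

```python
import random

class Agent:
    """Stand-in for a goal-conditioned policy with a learned success estimate.

    Hypothetical class for illustration; not part of the paper's code.
    """

    def __init__(self, name: str, num_actions: int = 4):
        self.name = name
        self.num_actions = num_actions

    def act(self, obs):
        # Placeholder policy: a real agent would condition on obs and its goal.
        return random.randrange(self.num_actions)

    def confidence(self, obs) -> float:
        # Placeholder for the agent's estimated probability of reaching its
        # current goal (e.g., derived from a learned value function).
        return random.random()


def reset_free_rollout(env_step, obs, num_steps: int, threshold: float = 0.9):
    """Run a reset-free rollout, alternating between a forward agent (which
    pursues the task goal) and a backward agent (which returns toward the
    initial state), switching when the active agent is confident of success."""
    forward, backward = Agent("forward"), Agent("backward")
    active, idle = forward, backward
    for _ in range(num_steps):
        obs = env_step(active.act(obs))
        if active.confidence(obs) > threshold:
            # Confident of finishing the current goal: terminate this phase
            # early and hand control to the other agent instead of spending
            # more environment steps on an already-solved subtask.
            active, idle = idle, active
    return obs


if __name__ == "__main__":
    # Toy 1-D chain environment: actions 0/1 move left/right, others no-op.
    state = 0

    def env_step(action):
        global state
        state += {0: -1, 1: 1}.get(action, 0)
        return state

    print(reset_free_rollout(env_step, state, num_steps=50))
```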
Cite
Text
Patil et al. "Intelligent Switching for Reset-Free RL." International Conference on Learning Representations, 2024.
Markdown
[Patil et al. "Intelligent Switching for Reset-Free RL." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/patil2024iclr-intelligent/)
BibTeX
@inproceedings{patil2024iclr-intelligent,
  title     = {{Intelligent Switching for Reset-Free RL}},
  author    = {Patil, Darshan and Rajendran, Janarthanan and Berseth, Glen and Chandar, Sarath},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/patil2024iclr-intelligent/}
}