DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning
Abstract
We introduce DynaMITE-RL, a meta-reinforcement learning (meta-RL) approach to approximate inference in environments where the latent state evolves at varying rates. We model episode sessions---parts of the episode where the latent state is fixed---and propose three key modifications to existing meta-RL methods: (i) consistency of latent information within sessions, (ii) session masking, and (iii) prior latent conditioning. We demonstrate the importance of these modifications in various domains, ranging from discrete Gridworld environments to continuous-control and simulated robot assistive tasks, illustrating the efficacy of DynaMITE-RL over state-of-the-art baselines in both online and offline RL settings.
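To give a concrete sense of what modification (i) might look like, below is a minimal, hypothetical sketch of a within-session latent-consistency penalty, assuming a VariBAD-style recurrent encoder that outputs a diagonal-Gaussian posterior over the latent state at every timestep. This is not the authors' implementation; the function name, the `session_ids` interface, and the choice of anchoring each posterior to the last posterior of its own session are all illustrative assumptions.

```python
import torch

def session_consistency_loss(mu, logvar, session_ids):
    """Penalize disagreement between per-timestep posteriors within a session.

    mu, logvar: [T, D] parameters of diagonal-Gaussian posteriors produced by a
                recurrent latent encoder (hypothetical interface).
    session_ids: [T] integer label of the session each timestep belongs to.

    Each timestep's posterior is pulled (via KL) toward the posterior at the
    final timestep of its own session, so the belief stays consistent while
    the latent state is assumed fixed.
    """
    T, _ = mu.shape
    total = mu.new_zeros(())
    for sid in session_ids.unique():
        idx = (session_ids == sid).nonzero(as_tuple=True)[0]
        last = idx[-1]
        var_t, var_last = logvar[idx].exp(), logvar[last].exp()
        # KL( N(mu_t, var_t) || N(mu_last, var_last) ), summed over latent dims
        kl = 0.5 * (
            logvar[last] - logvar[idx]
            + (var_t + (mu[idx] - mu[last]) ** 2) / var_last
            - 1.0
        ).sum(dim=-1)
        total = total + kl.sum()
    return total / T


if __name__ == "__main__":
    # Toy example: 10 timesteps, latent dim 4, two sessions of lengths 6 and 4.
    mu = torch.randn(10, 4)
    logvar = torch.zeros(10, 4)
    session_ids = torch.tensor([0] * 6 + [1] * 4)
    print(session_consistency_loss(mu, logvar, session_ids))
```

Anchoring to the session's final posterior is only one plausible way to encode "consistency within a session"; the same session labels could also be used to mask out terms from other sessions, in the spirit of modification (ii).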
Cite
Text
Liang et al. "DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-4490
Markdown
[Liang et al. "DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/liang2024neurips-dynamiterl/) doi:10.52202/079017-4490
BibTeX
@inproceedings{liang2024neurips-dynamiterl,
  title = {{DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning}},
  author = {Liang, Anthony and Tennenholtz, Guy and Hsu, Chih-Wei and Chow, Yinlam and Biyik, Erdem and Boutilier, Craig},
  booktitle = {Neural Information Processing Systems},
  year = {2024},
  doi = {10.52202/079017-4490},
  url = {https://mlanthology.org/neurips/2024/liang2024neurips-dynamiterl/}
}