Contextualized Hybrid Ensemble Q-Learning: Learning Fast with Control Priors

Abstract

Combining Reinforcement Learning (RL) with a prior controller can yield the best of both worlds: RL can solve complex nonlinear problems, while the control prior ensures safer exploration and speeds up training. Prior work largely blends both components with a fixed weight, neglecting that the RL agent's performance varies with the training progress and across regions in the state space. Therefore, we advocate for an adaptive strategy that dynamically adjusts the weighting based on the RL agent's current capabilities. We propose a new adaptive hybrid RL algorithm, Contextualized Hybrid Ensemble Q-learning (CHEQ). CHEQ has three key ingredients: (i) a time-invariant formulation of the adaptive hybrid RL problem treating the adaptive weight as a context variable, (ii) a weight adaptation mechanism based on the parametric uncertainty of a critic ensemble, and (iii) ensemble-based acceleration for data-efficient RL. Evaluating CHEQ on a car racing task reveals substantially stronger data efficiency, exploration safety, and transferability to unknown scenarios than state-of-the-art adaptive hybrid RL methods.
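The abstract describes the core mechanism only at a high level; the sketch below is not the authors' implementation but a minimal illustration of the idea, assuming a linear blend of the control-prior and RL actions and a weight derived from the standard deviation of a critic ensemble's Q-estimates. All names and thresholds (uncertainty_to_weight, u_low, u_high, lam_min, lam_max) are hypothetical placeholders; the paper defines the actual mapping.

import numpy as np

def critic_ensemble_uncertainty(q_values):
    # Proxy for parametric uncertainty: spread of the ensemble's Q-estimates.
    return float(np.std(q_values))

def uncertainty_to_weight(u, u_low=0.1, u_high=1.0, lam_min=0.2, lam_max=1.0):
    # Map ensemble uncertainty to a blending weight in [lam_min, lam_max]:
    # low uncertainty -> trust the RL agent (weight near lam_max),
    # high uncertainty -> rely on the control prior (weight near lam_min).
    frac = np.clip((u_high - u) / (u_high - u_low), 0.0, 1.0)
    return lam_min + frac * (lam_max - lam_min)

def hybrid_action(a_prior, a_rl, lam):
    # Weighted mixture of control-prior and RL actions.
    return (1.0 - lam) * np.asarray(a_prior) + lam * np.asarray(a_rl)

# Illustrative usage with dummy values.
q_ensemble = [1.2, 0.9, 1.4, 1.1, 1.0]   # Q-estimates from five critics for (s, a_rl)
u = critic_ensemble_uncertainty(q_ensemble)
lam = uncertainty_to_weight(u)
a = hybrid_action(a_prior=[0.0, 0.3], a_rl=[0.5, -0.2], lam=lam)

In this toy example, low critic disagreement yields a weight close to lam_max, so the executed action is dominated by the RL policy; early in training, when disagreement is large, the control prior dominates instead.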

Cite

Text

Cramer et al. "Contextualized Hybrid Ensemble Q-Learning: Learning Fast with Control Priors." ICML 2024 Workshops: ARLET, 2024.

Markdown

[Cramer et al. "Contextualized Hybrid Ensemble Q-Learning: Learning Fast with Control Priors." ICML 2024 Workshops: ARLET, 2024.](https://mlanthology.org/icmlw/2024/cramer2024icmlw-contextualized/)

BibTeX

@inproceedings{cramer2024icmlw-contextualized,
  title     = {{Contextualized Hybrid Ensemble Q-Learning: Learning Fast with Control Priors}},
  author    = {Cramer, Emma and Frauenknecht, Bernd and Sabirov, Ramil and Trimpe, Sebastian},
  booktitle = {ICML 2024 Workshops: ARLET},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/cramer2024icmlw-contextualized/}
}