Curious iLQR: Resolving Uncertainty in Model-Based RL

Abstract

Curiosity as a means of exploration during reinforcement learning has recently become very popular. However, little progress has been made in utilizing curiosity for learning control. In this work, we propose a model-based reinforcement learning (MBRL) framework that combines Bayesian modeling of the system dynamics with curious iLQR, a risk-seeking iterative LQR approach. During trajectory optimization, curious iLQR attempts to minimize both the task-dependent cost and the uncertainty in the dynamics model. We scale this approach to reaching tasks on 7-DoF manipulators and evaluate it in both simulation and real robot experiments. Our experiments consistently show that MBRL with curious iLQR more easily overcomes poor initial dynamics models and reaches desired joint configurations more reliably and with fewer system rollouts.
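The core idea in the abstract — a risk-seeking objective that trades off task cost against model uncertainty — can be sketched as a running cost for trajectory optimization. This is an illustrative reconstruction, not the paper's implementation: here the Bayesian dynamics uncertainty is approximated by the disagreement of a model ensemble, and the names (`curious_cost`, `ensemble_uncertainty`, `curiosity_weight`) are hypothetical.

```python
import numpy as np

def ensemble_uncertainty(models, x, u):
    """Disagreement (variance) across an ensemble of dynamics models,
    used as a stand-in for Bayesian model uncertainty at (x, u)."""
    preds = np.stack([m(x, u) for m in models])  # shape: (n_models, state_dim)
    return preds.var(axis=0).sum()

def curious_cost(x, u, x_goal, models, Q, R, curiosity_weight):
    """Risk-seeking running cost: the quadratic task cost minus a
    curiosity bonus, so the optimizer is drawn toward state-action
    pairs where the learned dynamics model is uncertain."""
    dx = x - x_goal
    task_cost = dx @ Q @ dx + u @ R @ u
    bonus = curiosity_weight * ensemble_uncertainty(models, x, u)
    return task_cost - bonus
```

With `curiosity_weight = 0` this reduces to the standard iLQR tracking objective; increasing the weight makes uncertain regions cheaper, encouraging exploratory rollouts that resolve the model's uncertainty.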

Cite

Text

Bechtle et al. "Curious iLQR: Resolving Uncertainty in Model-Based RL." ICML 2019 Workshops: RL4RealLife, 2019.

Markdown

[Bechtle et al. "Curious iLQR: Resolving Uncertainty in Model-Based RL." ICML 2019 Workshops: RL4RealLife, 2019.](https://mlanthology.org/icmlw/2019/bechtle2019icmlw-curious/)

BibTeX

@inproceedings{bechtle2019icmlw-curious,
  title     = {{Curious iLQR: Resolving Uncertainty in Model-Based RL}},
  author    = {Bechtle, Sarah and Rai, Akshara and Lin, Yixin and Righetti, Ludovic and Meier, Franziska},
  booktitle = {ICML 2019 Workshops: RL4RealLife},
  year      = {2019},
  url       = {https://mlanthology.org/icmlw/2019/bechtle2019icmlw-curious/}
}