Curious iLQR: Resolving Uncertainty in Model-Based RL
Abstract
Curiosity as a means of exploration in reinforcement learning has recently become very popular. However, very little progress has been made in utilizing curiosity for learning control. In this work, we propose a model-based reinforcement learning (MBRL) framework that combines Bayesian modeling of the system dynamics with curious iLQR, an iterative LQR approach that considers model uncertainty. During trajectory optimization the curious iLQR attempts to minimize both the task-dependent cost and the uncertainty in the dynamics model. We demonstrate the approach on reaching tasks with 7-DoF manipulators in simulation and on a real robot. Our experiments show that MBRL with curious iLQR reaches desired end-effector targets more reliably and with fewer system rollouts when learning a new task from scratch, and that the learned model generalizes better to new reaching tasks.
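The abstract's core idea, minimizing a task-dependent cost together with the dynamics model's uncertainty during trajectory optimization, can be illustrated with a minimal stage-cost sketch. All names (`curious_cost`, `pred_var`, `beta`) and the quadratic task cost are illustrative assumptions, not the paper's actual formulation; in the paper the uncertainty comes from a Bayesian dynamics model and enters the full iLQR backward pass.

```python
import numpy as np

def curious_cost(x, u, target, pred_var, beta=1.0):
    """Illustrative stage cost: quadratic task cost plus a weighted
    model-uncertainty term, as the abstract describes.

    x, u      -- state and control vectors at this timestep
    target    -- desired state (e.g. end-effector goal)
    pred_var  -- predictive variance of the dynamics model at (x, u)
    beta      -- weight on the uncertainty (curiosity) term
    """
    task_cost = np.sum((x - target) ** 2) + 1e-3 * np.sum(u ** 2)
    return task_cost + beta * float(pred_var)
```

Under this sketch, trajectories that pass through regions where the model is uncertain incur extra cost, so the optimizer trades off reaching the target against resolving model uncertainty, with `beta` controlling that trade-off.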
Cite
Text
Bechtle et al. "Curious iLQR: Resolving Uncertainty in Model-Based RL." Conference on Robot Learning, 2019.
Markdown
[Bechtle et al. "Curious iLQR: Resolving Uncertainty in Model-Based RL." Conference on Robot Learning, 2019.](https://mlanthology.org/corl/2019/bechtle2019corl-curious/)
BibTeX
@inproceedings{bechtle2019corl-curious,
title = {{Curious iLQR: Resolving Uncertainty in Model-Based RL}},
author = {Bechtle, Sarah and Lin, Yixin and Rai, Akshara and Righetti, Ludovic and Meier, Franziska},
booktitle = {Conference on Robot Learning},
year = {2019},
pages = {162--171},
volume = {100},
url = {https://mlanthology.org/corl/2019/bechtle2019corl-curious/}
}