Grounded Action Transformation for Robot Learning in Simulation
Abstract
Robot learning in simulation is a promising alternative to the prohibitive sample cost of learning in the physical world. Unfortunately, policies learned in simulation often perform worse than hand-coded policies when deployed on the physical robot. This paper proposes a new algorithm for learning in simulation — Grounded Action Transformation — and applies it to learning humanoid bipedal locomotion. Our approach results in a 43.27% improvement in forward walk velocity compared to a state-of-the-art hand-coded walk.
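
The abstract names the grounding idea without detailing it; as a rough illustration, the sketch below shows the core mechanism described in the paper: the agent's actions are transformed before being passed to the simulator so that simulated transitions better track the physical robot's dynamics. All interfaces here (real_forward_model, sim_inverse_model, policy, sim_env with a Gym-style step) are hypothetical placeholders for illustration, not the authors' implementation.

class GroundedActionTransformer:
    """Transforms policy actions so that the simulator's response to the
    transformed action approximates the physical robot's response to the
    original action (a minimal sketch, not the authors' code)."""

    def __init__(self, real_forward_model, sim_inverse_model):
        # real_forward_model(state, action) -> predicted next state on the robot
        # sim_inverse_model(state, next_state) -> action that yields next_state in sim
        self.real_forward_model = real_forward_model
        self.sim_inverse_model = sim_inverse_model

    def transform(self, state, action):
        # Predict the transition the physical robot would produce ...
        predicted_next_state = self.real_forward_model(state, action)
        # ... then ask the simulator's inverse model for the action that
        # reproduces that transition in simulation.
        return self.sim_inverse_model(state, predicted_next_state)


def grounded_rollout(policy, sim_env, transformer, horizon=200):
    """Run one episode in simulation, grounding every action first."""
    state = sim_env.reset()
    total_reward = 0.0
    for _ in range(horizon):
        action = policy(state)
        state, reward, done, _ = sim_env.step(transformer.transform(state, action))
        total_reward += reward
        if done:
            break
    return total_reward
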
Cite
Text
Hanna and Stone. "Grounded Action Transformation for Robot Learning in Simulation." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.11044
Markdown
[Hanna and Stone. "Grounded Action Transformation for Robot Learning in Simulation." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/hanna2017aaai-grounded/) doi:10.1609/AAAI.V31I1.11044
BibTeX
@inproceedings{hanna2017aaai-grounded,
title = {{Grounded Action Transformation for Robot Learning in Simulation}},
author = {Hanna, Josiah P. and Stone, Peter},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2017},
pages = {3834--3840},
doi = {10.1609/AAAI.V31I1.11044},
url = {https://mlanthology.org/aaai/2017/hanna2017aaai-grounded/}
}