Legible Robot Motion from Conditional Generative Models
Abstract
In human-robot collaboration, legible motion, which clearly conveys the robot's intentions and goals, is essential: when people can forecast a robot's next move, user experience, safety, and task efficiency all improve. Current methods for generating legible motion rely on hand-designed cost functions and classical motion planners, but there is a need for data-driven policies that are trained end-to-end on demonstration data. In this paper, we propose Generative Legible Motion Models (GLMM), a framework that uses conditional generative models to learn legible trajectories from human demonstrations. We find that GLMM produces motion that is 76% more legible than standard goal-conditioned generative models and 83% more legible than generative models without goal conditioning.
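The paper itself specifies the GLMM architecture; purely as an illustration of what a goal-conditioned generative trajectory model can look like, here is a minimal conditional-VAE sketch in PyTorch. All names, dimensions, and the CVAE choice are assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GoalConditionedCVAE(nn.Module):
    """Hypothetical sketch: a conditional VAE that samples a robot
    trajectory given a goal. Not the GLMM model from the paper."""

    def __init__(self, traj_dim=64, goal_dim=2, latent_dim=16):
        super().__init__()
        # Encoder maps (trajectory, goal) to latent Gaussian parameters.
        self.encoder = nn.Sequential(
            nn.Linear(traj_dim + goal_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),
        )
        # Decoder maps (latent sample, goal) back to a trajectory.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + goal_dim, 128), nn.ReLU(),
            nn.Linear(128, traj_dim),
        )
        self.latent_dim = latent_dim

    def forward(self, traj, goal):
        mu, logvar = self.encoder(torch.cat([traj, goal], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.decoder(torch.cat([z, goal], -1))
        return recon, mu, logvar

    @torch.no_grad()
    def sample(self, goal):
        # At test time, draw trajectories conditioned only on the goal.
        z = torch.randn(goal.shape[0], self.latent_dim)
        return self.decoder(torch.cat([z, goal], -1))

# Usage: reconstruct demonstrations during training, sample at test time.
model = GoalConditionedCVAE()
traj = torch.randn(8, 64)  # batch of flattened 32-step 2-D demonstrations
goal = torch.randn(8, 2)   # goal positions the motion should convey
recon, mu, logvar = model(traj, goal)
loss = nn.functional.mse_loss(recon, traj) \
     - 0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()  # ELBO terms
new_traj = model.sample(goal)
```

Conditioning the decoder on the goal is what makes goal-conditioned sampling possible; the abstract's comparison against models "without goal conditioning" corresponds to dropping that input.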
Cite
Text
Bronars and Xu. "Legible Robot Motion from Conditional Generative Models." ICML 2023 Workshops: ILHF, 2023.
Markdown
[Bronars and Xu. "Legible Robot Motion from Conditional Generative Models." ICML 2023 Workshops: ILHF, 2023.](https://mlanthology.org/icmlw/2023/bronars2023icmlw-legible/)
BibTeX
@inproceedings{bronars2023icmlw-legible,
  title = {{Legible Robot Motion from Conditional Generative Models}},
  author = {Bronars, Matthew and Xu, Danfei},
  booktitle = {ICML 2023 Workshops: ILHF},
  year = {2023},
  url = {https://mlanthology.org/icmlw/2023/bronars2023icmlw-legible/}
}