Equivariant Motion Manifold Primitives
Abstract
Existing movement primitive models for the most part focus on representing and generating a single trajectory for a given task, limiting their adaptability to situations in which unforeseen obstacles or new constraints may arise. In this work we propose Motion Manifold Primitives (MMP), a movement primitive paradigm that encodes and generates, for a given task, a continuous manifold of trajectories each of which can achieve the given task. To address the challenge of learning each motion manifold from a limited amount of data, we exploit inherent symmetries in the robot task by constructing motion manifold primitives that are equivariant with respect to given symmetry groups. Under the assumption that each of the MMPs can be smoothly deformed into each other, an autoencoder framework is developed to encode the MMPs and also generate solution trajectories. Experiments involving synthetic and real-robot examples demonstrate that our method outperforms existing manifold primitive methods by significant margins. Code is available at https://github.com/dlsfldl/EMMP-public.
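The abstract describes an autoencoder that encodes a continuous manifold of task-achieving trajectories and is made equivariant to the task's symmetry group. The sketch below is a rough, hypothetical PyTorch illustration only, not the authors' implementation (see the linked repository for the official code): it makes a trajectory autoencoder equivariant to planar translations of an assumed task parameter (e.g., a goal position) by canonicalizing inputs into a task-centric frame. All class, argument, and dimension names here are invented for illustration.

# Hypothetical sketch, not the authors' code. Equivariance to translations of the
# task parameter is obtained by canonicalization: shift the trajectory into a
# task-centric frame before encoding, and shift the decoded trajectory back.
import torch
import torch.nn as nn

class EquivariantTrajectoryAE(nn.Module):
    def __init__(self, horizon=100, dof=2, latent_dim=3, task_dim=2, hidden=256):
        super().__init__()
        self.horizon, self.dof = horizon, dof
        self.encoder = nn.Sequential(
            nn.Linear(horizon * dof + task_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + task_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon * dof),
        )

    def canonicalize(self, traj, task):
        # Express the trajectory relative to the task parameter, so translated
        # copies of the same demonstration map to the same canonical input.
        return traj - task.unsqueeze(1), torch.zeros_like(task)

    def forward(self, traj, task):
        # traj: (B, horizon, dof), task: (B, task_dim)
        c_traj, c_task = self.canonicalize(traj, task)
        z = self.encoder(torch.cat([c_traj.flatten(1), c_task], dim=-1))
        recon = self.decoder(torch.cat([z, c_task], dim=-1))
        # Map the canonical reconstruction back to the original frame.
        recon = recon.view(-1, self.horizon, self.dof) + task.unsqueeze(1)
        return recon, z

Under this construction, translating both the demonstration and the task parameter by the same offset leaves the latent code unchanged and shifts the reconstruction by that offset, which is the kind of built-in symmetry the paper exploits to learn motion manifolds from limited data.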
Cite
Text
Lee et al. "Equivariant Motion Manifold Primitives." Conference on Robot Learning, 2023.
Markdown
[Lee et al. "Equivariant Motion Manifold Primitives." Conference on Robot Learning, 2023.](https://mlanthology.org/corl/2023/lee2023corl-equivariant/)
BibTeX
@inproceedings{lee2023corl-equivariant,
title = {{Equivariant Motion Manifold Primitives}},
author = {Lee, Byeongho and Lee, Yonghyeon and Kim, Seungyeon and Son, MinJun and Park, Frank C.},
booktitle = {Conference on Robot Learning},
year = {2023},
pages = {1199--1221},
volume = {229},
url = {https://mlanthology.org/corl/2023/lee2023corl-equivariant/}
}