FabricFlowNet: Bimanual Cloth Manipulation with a Flow-Based Policy
Abstract
We address the problem of goal-directed cloth manipulation, a challenging task due to the deformability of cloth. Our insight is that optical flow, a technique normally used for motion estimation in video, can also provide an effective representation for corresponding cloth poses across observation and goal images. We introduce FabricFlowNet (FFN), a cloth manipulation policy that leverages flow both as an input and as an action representation to improve performance. FabricFlowNet also elegantly switches between bimanual and single-arm actions based on the desired goal. We show that FabricFlowNet significantly outperforms state-of-the-art model-free and model-based cloth manipulation policies that take image input. We also present real-world experiments on a bimanual system, demonstrating effective sim-to-real transfer. Finally, we show that our method, trained on a single square cloth, generalizes to other cloth shapes, such as T-shirts and rectangular cloths. Video and other supplementary materials are available at: https://sites.google.com/view/fabricflownet.
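The abstract's core idea, using a dense flow field between the observation and goal images as an action representation, can be illustrated with a minimal sketch. This is not the authors' implementation (FFN learns the flow and pick points with neural networks); it only shows how a pick-and-place action could be read off a given flow field, assuming `flow[..., 0]` holds row displacement and `flow[..., 1]` holds column displacement:

```python
import numpy as np

def flow_to_pick_place(flow):
    """Derive a single pick-and-place action from a dense flow field.

    flow: array of shape (H, W, 2) mapping each observed cloth pixel
          toward its goal position, as (row, col) displacements.
    Returns (pick, place): pick where the cloth must move the most,
    place at the flow-displaced target of that pixel.
    Illustrative sketch only, not FabricFlowNet's learned policy.
    """
    mag = np.linalg.norm(flow, axis=-1)                 # per-pixel displacement magnitude
    pick = np.unravel_index(np.argmax(mag), mag.shape)  # (row, col) of largest motion
    place = (pick[0] + flow[pick][0], pick[1] + flow[pick][1])
    return pick, place
```

A bimanual variant would extract two such pick/place pairs (e.g. from the two largest flow regions), which is closer in spirit to the dual-arm actions described above.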
Cite
Text

Weng et al. "FabricFlowNet: Bimanual Cloth Manipulation with a Flow-Based Policy." Conference on Robot Learning, 2021.

BibTeX
@inproceedings{weng2021corl-fabricflownet,
title = {{FabricFlowNet: Bimanual Cloth Manipulation with a Flow-Based Policy}},
author = {Weng, Thomas and Bajracharya, Sujay Man and Wang, Yufei and Agrawal, Khush and Held, David},
booktitle = {Conference on Robot Learning},
year = {2021},
pages = {192--202},
volume = {164},
url = {https://mlanthology.org/corl/2021/weng2021corl-fabricflownet/}
}