SwapNet: Garment Transfer in Single View Images
Abstract
We present SwapNet, a framework to transfer garments across images of people with arbitrary body pose, shape, and clothing. Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body. We present a neural network architecture that tackles these sub-problems with two task-specific sub-networks. Since acquiring pairs of images showing the same clothing on different bodies is difficult, we propose a novel weakly-supervised approach that generates training pairs from a single image via data augmentation. We present the first fully automatic method for garment transfer in unconstrained images without solving the difficult 3D reconstruction problem. We demonstrate a variety of transfer results and highlight our advantages over traditional image-to-image and analogy pipelines.
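The weakly-supervised pairing idea in the abstract, generating a training pair from a single image via augmentation, can be sketched minimally: warp one clothing-segmentation map so it stands in for the "other body", and use the untouched map as the reconstruction target. This is an illustrative toy, not the paper's implementation; `make_training_pair`, its parameters, and the choice of flip/translation augmentations are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(seg_map, max_shift=8):
    """Form a weakly-supervised pair from ONE segmentation map:
    a randomly warped copy (input) and the original (target).
    Hypothetical sketch; names/augmentations are not from the paper."""
    warped = seg_map.copy()
    # random horizontal flip
    if rng.random() < 0.5:
        warped = warped[:, ::-1]
    # random translation as a crude stand-in for pose/shape jitter
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    warped = np.roll(warped, shift=(dy, dx), axis=(0, 1))
    return warped, seg_map  # (augmented "source clothing", target)

# toy 64x64 segmentation map with 4 garment classes
seg = rng.integers(0, 4, size=(64, 64))
src, tgt = make_training_pair(seg)
print(src.shape, tgt.shape)  # (64, 64) (64, 64)
```

Because both augmentations merely permute pixels, the warped input keeps the same garment statistics as the target, which is what lets a single image supervise the disentangling stage.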
Cite
Text
Raj et al. "SwapNet: Garment Transfer in Single View Images." Proceedings of the European Conference on Computer Vision (ECCV), 2018.
Markdown
[Raj et al. "SwapNet: Garment Transfer in Single View Images." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/raj2018eccv-swapnet/)
BibTeX
@inproceedings{raj2018eccv-swapnet,
  title     = {{SwapNet: Garment Transfer in Single View Images}},
  author    = {Raj, Amit and Sangkloy, Patsorn and Chang, Huiwen and Lu, Jingwan and Ceylan, Duygu and Hays, James},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  url       = {https://mlanthology.org/eccv/2018/raj2018eccv-swapnet/}
}