Disentangled Cycle Consistency for Highly-Realistic Virtual Try-on
Abstract
Image virtual try-on replaces the clothes on a person image with a desired in-shop clothes image. It is challenging because the person and the in-shop clothes are unpaired. Existing methods formulate virtual try-on as either in-painting or cycle consistency. Both formulations encourage the generation networks to reconstruct the input image in a self-supervised manner. However, existing methods do not differentiate clothing and non-clothing regions. A straightforward generation impedes virtual try-on quality because of the heavily coupled image contents. In this paper, we propose a Disentangled Cycle-consistency Try-On Network (DCTON). DCTON produces highly-realistic try-on images by disentangling important components of virtual try-on, including clothes warping, skin synthesis, and image composition. Moreover, DCTON can be naturally trained in a self-supervised manner following cycle consistency learning. Extensive experiments on challenging benchmarks show that DCTON outperforms state-of-the-art approaches.
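The cycle-consistency learning the abstract refers to penalizes a generator pair for failing to reconstruct the input after a round trip between domains. Below is a minimal, self-contained sketch of that idea; the generator functions and the L1 reconstruction penalty are illustrative stand-ins, not the paper's actual networks or losses.

```python
# Sketch of cycle-consistency reconstruction loss (illustrative only).
# gen_a2b / gen_b2a are hypothetical stand-ins for paired generators
# mapping between two domains; real models would be neural networks.

def l1_loss(x, y):
    """Mean absolute error between two equal-length vectors."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def cycle_consistency_loss(x, gen_a2b, gen_b2a):
    """Map x to the other domain and back; penalize reconstruction error."""
    x_reconstructed = gen_b2a(gen_a2b(x))
    return l1_loss(x, x_reconstructed)

# Toy generators forming an exactly invertible pair, so the loss is zero.
forward = lambda v: [2.0 * a for a in v]
backward = lambda v: [a / 2.0 for a in v]

loss = cycle_consistency_loss([1.0, 2.0, 3.0], forward, backward)
```

In practice the two generators are trained jointly, and the cycle loss is combined with adversarial and other task-specific terms; here only the reconstruction term is shown.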
Cite
Text
Ge et al. "Disentangled Cycle Consistency for Highly-Realistic Virtual Try-on." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.01665
Markdown
[Ge et al. "Disentangled Cycle Consistency for Highly-Realistic Virtual Try-on." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/ge2021cvpr-disentangled/) doi:10.1109/CVPR46437.2021.01665
BibTeX
@inproceedings{ge2021cvpr-disentangled,
title = {{Disentangled Cycle Consistency for Highly-Realistic Virtual Try-on}},
author = {Ge, Chongjian and Song, Yibing and Ge, Yuying and Yang, Han and Liu, Wei and Luo, Ping},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {16928-16937},
doi = {10.1109/CVPR46437.2021.01665},
url = {https://mlanthology.org/cvpr/2021/ge2021cvpr-disentangled/}
}