Style-Based Global Appearance Flow for Virtual Try-on
Abstract
Image-based virtual try-on aims to fit an in-shop garment onto a clothed person image. To achieve this, a key step is garment warping, which spatially aligns the target garment with the corresponding body parts in the person image. Prior methods typically adopt a local appearance flow estimation model. They are thus intrinsically susceptible to difficult body poses/occlusions and large misalignments between person and garment images. To overcome this limitation, a novel global appearance flow estimation model is proposed in this work. For the first time, a StyleGAN-based architecture is adopted for appearance flow estimation. This enables us to take advantage of a global style vector to encode a whole-image context to cope with the aforementioned challenges. To guide the StyleGAN flow generator to pay more attention to local garment deformation, a flow refinement module is introduced to add local context. Experimental results on a popular virtual try-on benchmark show that our method achieves new state-of-the-art performance. It is particularly effective in an 'in-the-wild' application scenario where the reference image is full-body, resulting in a large misalignment with the garment image.
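To make the notion of appearance flow concrete: a dense flow field assigns each output pixel an offset pointing back into the garment image, and the warped garment is produced by sampling at those offsets. The sketch below is an illustrative, simplified warp in NumPy (nearest-neighbor sampling, a made-up `warp_with_flow` helper); it is not the paper's StyleGAN-based flow estimator, which predicts the flow field from a global style vector and refines it with local context.

```python
import numpy as np

def warp_with_flow(garment, flow):
    """Backward-warp `garment` (H, W, C) with a dense appearance flow.

    flow[y, x] = (dy, dx) gives the offset from each output pixel to the
    source pixel in the garment image. Nearest-neighbor sampling is used
    here for brevity; real models use differentiable bilinear sampling.
    """
    H, W, _ = garment.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return garment[src_y, src_x]

# A uniform flow of (0, -1) shifts the garment one pixel to the right
# (each output pixel samples from the pixel to its left).
img = np.arange(12, dtype=float).reshape(3, 4, 1)
flow = np.zeros((3, 4, 2))
flow[..., 1] = -1.0
shifted = warp_with_flow(img, flow)
```

In the paper's setting, the flow field itself is the output of a generator whose layers are modulated by a single style vector summarizing the whole person/garment pair, which is what gives the warp its global receptive field.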
Cite
Text
He et al. "Style-Based Global Appearance Flow for Virtual Try-on." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00346

Markdown

[He et al. "Style-Based Global Appearance Flow for Virtual Try-on." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/he2022cvpr-stylebased/) doi:10.1109/CVPR52688.2022.00346

BibTeX
@inproceedings{he2022cvpr-stylebased,
title = {{Style-Based Global Appearance Flow for Virtual Try-on}},
author = {He, Sen and Song, Yi-Zhe and Xiang, Tao},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {3470--3479},
doi = {10.1109/CVPR52688.2022.00346},
url = {https://mlanthology.org/cvpr/2022/he2022cvpr-stylebased/}
}