Pose Guided Person Image Generation

Abstract

This paper proposes the novel Pose Guided Person Generation Network (PG$^2$), which synthesizes person images in arbitrary poses based on an image of that person and a novel pose. Our generation framework PG$^2$ uses the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage, the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person in the target pose. The second stage then refines this initial, blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128$\times$64 re-identification images and 256$\times$256 fashion photos show that our model generates high-quality person images with convincing details.
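The two-stage data flow described in the abstract can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the paper's implementation: the `UNetLike` module (a single-scale encoder-decoder standing in for the actual U-Net-like generators), the channel counts, and the 18-channel pose-heatmap input are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class UNetLike(nn.Module):
    """Tiny encoder-decoder standing in for the paper's U-Net-like generators."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Downsample by 2, then upsample back to the input resolution.
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, x):
        return self.dec(self.enc(x))

class PG2Sketch(nn.Module):
    """Two-stage flow: pose integration (Stage I) then image refinement (Stage II)."""
    def __init__(self, img_ch=3, pose_ch=18):  # pose_ch=18 heatmaps is an assumption
        super().__init__()
        # Stage I: condition image + target-pose heatmaps -> coarse target image.
        self.g1 = UNetLike(img_ch + pose_ch, img_ch)
        # Stage II: condition image + coarse result -> refined image
        # (trained adversarially against a discriminator in the paper).
        self.g2 = UNetLike(img_ch + img_ch, img_ch)

    def forward(self, cond_img, pose_maps):
        coarse = self.g1(torch.cat([cond_img, pose_maps], dim=1))
        refined = self.g2(torch.cat([cond_img, coarse], dim=1))
        return coarse, refined
```

Only the coarse-then-refine pipeline is shown here; the adversarial training loop and loss terms from the paper are omitted.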

Cite

Text

Ma et al. "Pose Guided Person Image Generation." Neural Information Processing Systems, 2017.

Markdown

[Ma et al. "Pose Guided Person Image Generation." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/ma2017neurips-pose/)

BibTeX

@inproceedings{ma2017neurips-pose,
  title     = {{Pose Guided Person Image Generation}},
  author    = {Ma, Liqian and Jia, Xu and Sun, Qianru and Schiele, Bernt and Tuytelaars, Tinne and Van Gool, Luc},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {406--416},
  url       = {https://mlanthology.org/neurips/2017/ma2017neurips-pose/}
}