Edge Guided Progressively Generative Image Outpainting

Abstract

Deep-learning-based generative models have proven capable of achieving excellent results in numerous image processing tasks with a wide range of applications. One significant improvement of deep-learning approaches over traditional approaches is their ability to regenerate semantically coherent images while relying only on an input with limited information. This advantage becomes even more crucial when the input size is only a small fraction of the output size. Such image expansion tasks can be more challenging as the missing area may originally contain many semantic features that are critical in judging the quality of an image. In this paper, we propose an edge-guided generative network model for producing semantically consistent output from a small image input. Our experiments show the proposed network is able to regenerate high-quality images even when some structural features are missing in the input.

Cite

Text

Lin et al. "Edge Guided Progressively Generative Image Outpainting." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00090

Markdown

[Lin et al. "Edge Guided Progressively Generative Image Outpainting." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/lin2021cvprw-edge/) doi:10.1109/CVPRW53098.2021.00090

BibTeX

@inproceedings{lin2021cvprw-edge,
  title     = {{Edge Guided Progressively Generative Image Outpainting}},
  author    = {Lin, Han and Pagnucco, Maurice and Song, Yang},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2021},
  pages     = {806--815},
  doi       = {10.1109/CVPRW53098.2021.00090},
  url       = {https://mlanthology.org/cvprw/2021/lin2021cvprw-edge/}
}