EdgeConnect: Structure Guided Image Inpainting Using Edge Prediction

Abstract

In recent years, many deep learning techniques have been applied to the image inpainting problem: the task of filling incomplete regions of an image. However, these models struggle to recover and/or preserve image structure, especially when significant portions of the image are missing. We propose a two-stage model that separates the inpainting problem into structure prediction and image completion. Similar to sketch art, our model first predicts the image structure of the missing region in the form of edge maps. The predicted edge maps are passed to the second stage to guide the inpainting process. We evaluate our model end-to-end over the publicly available datasets CelebA, CelebHQ, Places2, and Paris StreetView on images up to a resolution of 512 × 512. We demonstrate that this approach outperforms current state-of-the-art techniques quantitatively and qualitatively.
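The abstract describes a two-stage pipeline: an edge generator hallucinates structure inside the missing region, and an image completion network fills the region guided by the composited edge map. Below is a minimal, illustrative PyTorch sketch of that data flow only. The module names (EdgeGenerator, InpaintingNetwork, inpaint), the shallow convolutional stacks, and the compositing step are assumptions made for this sketch; they are not the authors' released architecture or training procedure.

import torch
import torch.nn as nn


class EdgeGenerator(nn.Module):
    """Stage 1 (sketch): predicts a full edge map from the masked grayscale
    image, the masked edge map, and the binary mask (1 = missing)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, gray_masked, edge_masked, mask):
        x = torch.cat([gray_masked, edge_masked, mask], dim=1)
        return self.net(x)


class InpaintingNetwork(nn.Module):
    """Stage 2 (sketch): completes the RGB image given the masked image and
    the composited edge map produced by stage 1."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb_masked, edge_full):
        x = torch.cat([rgb_masked, edge_full], dim=1)
        return self.net(x)


def inpaint(rgb, gray, edges, mask, edge_gen, inpaint_net):
    """End-to-end inference sketch: keep known pixels/edges, predict the rest."""
    rgb_masked = rgb * (1 - mask)
    gray_masked = gray * (1 - mask)
    edges_masked = edges * (1 - mask)

    # Stage 1: predict edges inside the hole, keep known edges outside it.
    pred_edges = edge_gen(gray_masked, edges_masked, mask)
    edges_comp = edges * (1 - mask) + pred_edges * mask

    # Stage 2: fill the hole guided by the composited edge map.
    pred_rgb = inpaint_net(rgb_masked, edges_comp)
    return rgb * (1 - mask) + pred_rgb * mask


if __name__ == "__main__":
    B, H, W = 1, 256, 256
    rgb = torch.rand(B, 3, H, W)
    gray = rgb.mean(dim=1, keepdim=True)
    edges = (torch.rand(B, 1, H, W) > 0.9).float()  # stand-in for a Canny edge map
    mask = torch.zeros(B, 1, H, W)
    mask[..., 64:192, 64:192] = 1.0                 # square hole

    out = inpaint(rgb, gray, edges, mask, EdgeGenerator(), InpaintingNetwork())
    print(out.shape)  # torch.Size([1, 3, 256, 256])

The key design point the sketch illustrates is the composition step: outside the mask the known edges and pixels are passed through unchanged, so only the missing region is synthesized, first as structure and then as texture.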

Cite

Text

Nazeri et al. "EdgeConnect: Structure Guided Image Inpainting Using Edge Prediction." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00408

Markdown

[Nazeri et al. "EdgeConnect: Structure Guided Image Inpainting Using Edge Prediction." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/nazeri2019iccvw-edgeconnect/) doi:10.1109/ICCVW.2019.00408

BibTeX

@inproceedings{nazeri2019iccvw-edgeconnect,
  title     = {{EdgeConnect: Structure Guided Image Inpainting Using Edge Prediction}},
  author    = {Nazeri, Kamyar and Ng, Eric and Joseph, Tony and Qureshi, Faisal Z. and Ebrahimi, Mehran},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2019},
  pages     = {3265--3274},
  doi       = {10.1109/ICCVW.2019.00408},
  url       = {https://mlanthology.org/iccvw/2019/nazeri2019iccvw-edgeconnect/}
}