Open-Edit: Open-Domain Image Manipulation with Open-Vocabulary Instructions
Abstract
We propose a novel algorithm, named Open-Edit, which is the first attempt at open-domain image manipulation with open-vocabulary instructions. It is a challenging task considering the large variation of image domains and the lack of training supervision. Our approach takes advantage of the unified visual-semantic embedding space pretrained on a general image-caption dataset, and manipulates the embedded visual features by applying text-guided vector arithmetic on the image feature maps. A structure-preserving image decoder then generates the manipulated images from the manipulated feature maps. We further propose an on-the-fly, sample-specific optimization approach with cycle-consistency constraints to regularize the manipulated images and force them to preserve details of the source images. Our approach shows promising results in manipulating open-vocabulary color, texture, and high-level attributes for various scenarios of open-domain images.
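The core manipulation step described above, vector arithmetic on visual features guided by a pair of text embeddings, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and parameter names (`manipulate_features`, `strength`) are assumptions, and the embeddings here are random placeholders standing in for features from the pretrained visual-semantic space.

```python
import numpy as np

def manipulate_features(feature_map, src_text_emb, tgt_text_emb, strength=1.0):
    """Shift an embedded image feature map along a text-defined direction.

    feature_map: (H, W, D) visual features in the joint visual-semantic space.
    src_text_emb, tgt_text_emb: (D,) text embeddings of the source and target
    concepts, e.g. emb("red") and emb("blue") for a color edit.
    `strength` is an illustrative knob, not a parameter from the paper.
    """
    # Direction in embedding space from the source concept to the target.
    direction = tgt_text_emb - src_text_emb
    direction = direction / (np.linalg.norm(direction) + 1e-8)
    # Apply the same shift at every spatial location of the feature map;
    # a decoder would then render an image from the edited features.
    return feature_map + strength * direction

# Toy usage with random placeholder embeddings.
rng = np.random.default_rng(0)
fmap = rng.normal(size=(4, 4, 8))
src, tgt = rng.normal(size=8), rng.normal(size=8)
edited = manipulate_features(fmap, src, tgt)
```

In the paper's setting, the edited feature map is passed to the structure-preserving decoder, and the sample-specific cycle-consistency optimization further regularizes the result; neither step is shown here.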
Cite
Text
Liu et al. "Open-Edit: Open-Domain Image Manipulation with Open-Vocabulary Instructions." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58621-8_6
Markdown
[Liu et al. "Open-Edit: Open-Domain Image Manipulation with Open-Vocabulary Instructions." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/liu2020eccv-openedit/) doi:10.1007/978-3-030-58621-8_6
BibTeX
@inproceedings{liu2020eccv-openedit,
title = {{Open-Edit: Open-Domain Image Manipulation with Open-Vocabulary Instructions}},
author = {Liu, Xihui and Lin, Zhe and Zhang, Jianming and Zhao, Handong and Tran, Quan and Wang, Xiaogang and Li, Hongsheng},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58621-8_6},
url = {https://mlanthology.org/eccv/2020/liu2020eccv-openedit/}
}